I0219 23:38:44.623320 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0219 23:38:44.624673 9 e2e.go:109] Starting e2e run "321171ef-a53d-4d69-8048-69a97ebb2fc5" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582155522 - Will randomize all specs
Will run 280 of 4845 specs

Feb 19 23:38:44.692: INFO: >>> kubeConfig: /root/.kube/config
Feb 19 23:38:44.696: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 19 23:38:44.728: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 19 23:38:44.783: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 19 23:38:44.783: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 19 23:38:44.783: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 19 23:38:44.795: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 19 23:38:44.796: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 19 23:38:44.796: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Feb 19 23:38:44.798: INFO: kube-apiserver version: v1.17.0
Feb 19 23:38:44.798: INFO: >>> kubeConfig: /root/.kube/config
Feb 19 23:38:44.808: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:38:44.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Feb 19 23:38:44.883: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-47d24082-87c5-4521-9188-82c777769758
STEP: Creating a pod to test consume secrets
Feb 19 23:38:44.907: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db" in namespace "projected-4654" to be "success or failure"
Feb 19 23:38:44.978: INFO: Pod "pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db": Phase="Pending", Reason="", readiness=false. Elapsed: 70.924206ms
Feb 19 23:38:46.986: INFO: Pod "pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079344643s
Feb 19 23:38:48.993: INFO: Pod "pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086088887s
Feb 19 23:38:50.998: INFO: Pod "pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090899344s
Feb 19 23:38:53.006: INFO: Pod "pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099392076s
STEP: Saw pod success
Feb 19 23:38:53.006: INFO: Pod "pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db" satisfied condition "success or failure"
Feb 19 23:38:53.010: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db container projected-secret-volume-test:
STEP: delete the pod
Feb 19 23:38:53.081: INFO: Waiting for pod pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db to disappear
Feb 19 23:38:53.093: INFO: Pod pod-projected-secrets-336b7b9a-41ef-4213-a944-85f24a1672db no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:38:53.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4654" for this suite.
• [SLOW TEST:8.293 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":1,"skipped":34,"failed":0}
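
The spec above creates a pod that consumes a secret through a projected volume, remapping the key to a new path ("mappings") and setting an explicit per-file mode ("Item Mode"). The log does not print the actual pod spec; a minimal sketch of such a manifest, with shortened names, a busybox stand-in for the e2e mounttest image, and an assumed key and 0400 mode:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.31                       # stand-in for the e2e test image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map     # the run used a generated name
          items:
          - key: data-1                       # assumed key name
            path: new-path-data-1             # "mappings": key remapped to a new path
            mode: 0400                        # "Item Mode": per-file permission bits

The plain secret volume type, exercised by a later spec in this run, accepts the same items/mode mapping directly under secret:.
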
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:38:53.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-5b0de5ae-830b-4fa1-a046-d81adde9da87
STEP: Creating a pod to test consume configMaps
Feb 19 23:38:53.230: INFO: Waiting up to 5m0s for pod "pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562" in namespace "configmap-3390" to be "success or failure"
Feb 19 23:38:53.329: INFO: Pod "pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562": Phase="Pending", Reason="", readiness=false. Elapsed: 99.060861ms
Feb 19 23:38:55.337: INFO: Pod "pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106852038s
Feb 19 23:38:57.345: INFO: Pod "pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115377563s
Feb 19 23:38:59.355: INFO: Pod "pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125354604s
Feb 19 23:39:01.364: INFO: Pod "pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134250728s
STEP: Saw pod success
Feb 19 23:39:01.364: INFO: Pod "pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562" satisfied condition "success or failure"
Feb 19 23:39:01.368: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562 container configmap-volume-test:
STEP: delete the pod
Feb 19 23:39:02.247: INFO: Waiting for pod pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562 to disappear
Feb 19 23:39:02.285: INFO: Pod pod-configmaps-c990ca7d-932b-49f5-92c6-89cd594d4562 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:39:02.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3390" for this suite.
• [SLOW TEST:9.200 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":2,"skipped":72,"failed":0}
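
"Multiple volumes in the same pod" means one ConfigMap referenced by two volumes mounted at different paths. A sketch under those assumptions (names and mount paths illustrative; the log does not show the spec):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.31                  # stand-in for the e2e test image
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume        # same ConfigMap referenced twice
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
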
SSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:39:02.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 19 23:39:10.517: INFO: Waiting up to 5m0s for pod "client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078" in namespace "pods-5366" to be "success or failure"
Feb 19 23:39:10.537: INFO: Pod "client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078": Phase="Pending", Reason="", readiness=false. Elapsed: 20.085467ms
Feb 19 23:39:12.555: INFO: Pod "client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037720637s
Feb 19 23:39:14.570: INFO: Pod "client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052901284s
Feb 19 23:39:16.580: INFO: Pod "client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062790367s
Feb 19 23:39:18.586: INFO: Pod "client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069074107s
STEP: Saw pod success
Feb 19 23:39:18.586: INFO: Pod "client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078" satisfied condition "success or failure"
Feb 19 23:39:18.590: INFO: Trying to get logs from node jerma-node pod client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078 container env3cont:
STEP: delete the pod
Feb 19 23:39:18.676: INFO: Waiting for pod client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078 to disappear
Feb 19 23:39:18.683: INFO: Pod client-envvars-9a8b2655-404a-4e71-82d7-313708bb0078 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:39:18.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5366" for this suite.
• [SLOW TEST:16.390 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":3,"skipped":83,"failed":0}
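
The gap before the 23:39:10 wait is this spec starting a server pod and a Service in front of it; the client-envvars pod then only dumps its environment, because the kubelet injects the addresses of services that exist when a pod starts. The variable names are the service name upper-cased with dashes mapped to underscores. A sketch, assuming a service named fooservice (the log does not show the names used):

apiVersion: v1
kind: Service
metadata:
  name: fooservice
spec:
  selector:
    name: server
  ports:
  - port: 8765
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-example
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox:1.31                  # stand-in for the e2e test image
    # Output should include, among others:
    #   FOOSERVICE_SERVICE_HOST=<cluster IP>
    #   FOOSERVICE_SERVICE_PORT=8765
    command: ["sh", "-c", "env | grep FOOSERVICE"]

Note the ordering requirement: these variables are only injected into pods created after the service, which is why the test creates the service first.
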
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:39:18.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 19 23:39:29.478: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5114 pod-service-account-3aa66fee-8c91-41d9-a85f-709064017cb4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 19 23:39:32.288: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5114 pod-service-account-3aa66fee-8c91-41d9-a85f-709064017cb4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 19 23:39:32.621: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5114 pod-service-account-3aa66fee-8c91-41d9-a85f-709064017cb4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:39:32.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5114" for this suite.
• [SLOW TEST:14.599 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":280,"completed":4,"skipped":124,"failed":0}
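
The three kubectl exec commands above read the credential files the kubelet projects into pods that automount their service account: token, ca.crt, and namespace under /var/run/secrets/kubernetes.io/serviceaccount/. A pod equivalent to the one being exec'd into, as a sketch (image and sleep command are stand-ins):

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example
spec:
  serviceAccountName: default
  automountServiceAccountToken: true   # also the default when unset
  containers:
  - name: test
    image: busybox:1.31                # stand-in; any long-running image works
    command: ["sleep", "3600"]
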
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:39:33.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 19 23:39:33.425: INFO: Waiting up to 5m0s for pod "pod-e717d657-3725-41df-8157-7db0e2df4687" in namespace "emptydir-138" to be "success or failure"
Feb 19 23:39:33.452: INFO: Pod "pod-e717d657-3725-41df-8157-7db0e2df4687": Phase="Pending", Reason="", readiness=false. Elapsed: 26.847482ms
Feb 19 23:39:35.462: INFO: Pod "pod-e717d657-3725-41df-8157-7db0e2df4687": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037035194s
Feb 19 23:39:37.550: INFO: Pod "pod-e717d657-3725-41df-8157-7db0e2df4687": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125237957s
Feb 19 23:39:39.557: INFO: Pod "pod-e717d657-3725-41df-8157-7db0e2df4687": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131331383s
Feb 19 23:39:41.588: INFO: Pod "pod-e717d657-3725-41df-8157-7db0e2df4687": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162922643s
STEP: Saw pod success
Feb 19 23:39:41.588: INFO: Pod "pod-e717d657-3725-41df-8157-7db0e2df4687" satisfied condition "success or failure"
Feb 19 23:39:41.593: INFO: Trying to get logs from node jerma-node pod pod-e717d657-3725-41df-8157-7db0e2df4687 container test-container:
STEP: delete the pod
Feb 19 23:39:41.751: INFO: Waiting for pod pod-e717d657-3725-41df-8157-7db0e2df4687 to disappear
Feb 19 23:39:41.760: INFO: Pod pod-e717d657-3725-41df-8157-7db0e2df4687 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:39:41.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-138" for this suite.
• [SLOW TEST:8.475 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":5,"skipped":130,"failed":0}
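
This and the three later EmptyDir specs in this run all follow one pattern; the (user,mode,medium) triple in the name selects the writing user (root vs. a non-root UID), the permission bits exercised on a file in the volume, and whether the volume is node-disk-backed (default) or tmpfs (medium: Memory). One sketch covers them all; the UID and shell commands are illustrative stand-ins for the e2e mounttest image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31
    securityContext:
      runAsUser: 1001                  # drop for the (root,...) variant
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # "default" medium; use medium: Memory for the tmpfs variants
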
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:39:41.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 19 23:39:41.920: INFO: Waiting up to 5m0s for pod "pod-7da15d4a-9f9b-42f7-aed3-029448b9de59" in namespace "emptydir-5923" to be "success or failure"
Feb 19 23:39:42.026: INFO: Pod "pod-7da15d4a-9f9b-42f7-aed3-029448b9de59": Phase="Pending", Reason="", readiness=false. Elapsed: 106.028512ms
Feb 19 23:39:44.033: INFO: Pod "pod-7da15d4a-9f9b-42f7-aed3-029448b9de59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113677556s
Feb 19 23:39:46.057: INFO: Pod "pod-7da15d4a-9f9b-42f7-aed3-029448b9de59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137747183s
Feb 19 23:39:48.062: INFO: Pod "pod-7da15d4a-9f9b-42f7-aed3-029448b9de59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142385073s
Feb 19 23:39:50.095: INFO: Pod "pod-7da15d4a-9f9b-42f7-aed3-029448b9de59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.175099077s
STEP: Saw pod success
Feb 19 23:39:50.095: INFO: Pod "pod-7da15d4a-9f9b-42f7-aed3-029448b9de59" satisfied condition "success or failure"
Feb 19 23:39:50.099: INFO: Trying to get logs from node jerma-node pod pod-7da15d4a-9f9b-42f7-aed3-029448b9de59 container test-container:
STEP: delete the pod
Feb 19 23:39:50.171: INFO: Waiting for pod pod-7da15d4a-9f9b-42f7-aed3-029448b9de59 to disappear
Feb 19 23:39:50.177: INFO: Pod pod-7da15d4a-9f9b-42f7-aed3-029448b9de59 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:39:50.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5923" for this suite.
• [SLOW TEST:8.419 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":6,"skipped":137,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:39:50.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 19 23:39:51.071: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 19 23:39:53.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752390, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 19 23:39:55.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752390, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 19 23:39:57.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752390, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 19 23:39:59.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752390, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 19 23:40:02.136: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:40:03.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7220" for this suite.
STEP: Destroying namespace "webhook-7220-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:12.979 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":7,"skipped":171,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:40:03.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 19 23:40:04.711: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 19 23:40:06.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 19 23:40:08.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 19 23:40:10.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752404, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 19 23:40:13.790: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb 19 23:40:13.840: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:40:13.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2523" for this suite.
STEP: Destroying namespace "webhook-2523-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:10.869 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":8,"skipped":171,"failed":0}
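Same scaffolding as the previous spec, but here the registration is a validating webhook whose rules match CustomResourceDefinition creation, so the subsequent CRD create at 23:40:13.840 is rejected. A rough sketch, again with placeholder names, path, and caBundle:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-deny-crd                         # placeholder name
webhooks:
- name: deny-crd.example.com                      # placeholder
  clientConfig:
    service:
      namespace: webhook-2523                     # the test namespace above
      name: e2e-test-webhook
      path: /crd                                  # placeholder path
    caBundle: "<base64-encoded CA certificate>"   # placeholder
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
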
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:40:14.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 19 23:40:22.257: INFO: &Pod{ObjectMeta:{send-events-a776285f-247e-4957-b069-fc1f2f74a6cf events-1873 /api/v1/namespaces/events-1873/pods/send-events-a776285f-247e-4957-b069-fc1f2f74a6cf 25c17b0b-2c8f-43b2-a158-4f4e14fb3df3 9490498 0 2020-02-19 23:40:14 +0000 UTC map[name:foo time:218145830] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4cdx4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4cdx4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4cdx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-19 23:40:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-19 23:40:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-19 23:40:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-19 23:40:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-19 23:40:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-19 23:40:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://4e53204988905c439c3890ffd32e25c94effc069063b027f96df573fe46dbf56,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Feb 19 23:40:24.267: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 19 23:40:26.276: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:40:26.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1873" for this suite.
• [SLOW TEST:12.291 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":280,"completed":9,"skipped":206,"failed":0}
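The Go-syntax dump above is hard to scan; stripped of defaulted fields, the pod it describes boils down to this manifest (reconstructed directly from the dump):

apiVersion: v1
kind: Pod
metadata:
  name: send-events-a776285f-247e-4957-b069-fc1f2f74a6cf
  namespace: events-1873
  labels:
    name: foo
    time: "218145830"
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["serve-hostname"]
    ports:
    - containerPort: 80
      protocol: TCP

The two checks then look for a scheduler event (the pod being bound to jerma-node) and kubelet events (image pull, container create/start) whose involved object matches this pod's name, namespace, and UID.
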
SSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:40:26.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-5c8f3cea-9376-4b5e-b9e0-0fde71e51625
STEP: Creating a pod to test consume secrets
Feb 19 23:40:26.491: INFO: Waiting up to 5m0s for pod "pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a" in namespace "secrets-9207" to be "success or failure"
Feb 19 23:40:26.500: INFO: Pod "pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.206051ms
Feb 19 23:40:28.512: INFO: Pod "pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020901076s
Feb 19 23:40:30.524: INFO: Pod "pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032580062s
Feb 19 23:40:32.536: INFO: Pod "pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044731241s
Feb 19 23:40:34.549: INFO: Pod "pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057975391s
Feb 19 23:40:36.563: INFO: Pod "pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071511681s
STEP: Saw pod success
Feb 19 23:40:36.563: INFO: Pod "pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a" satisfied condition "success or failure"
Feb 19 23:40:36.568: INFO: Trying to get logs from node jerma-node pod pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a container secret-volume-test:
STEP: delete the pod
Feb 19 23:40:36.623: INFO: Waiting for pod pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a to disappear
Feb 19 23:40:36.628: INFO: Pod pod-secrets-e328fd45-37f1-4097-8360-22dbc2f8f54a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:40:36.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9207" for this suite.
• [SLOW TEST:10.300 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":10,"skipped":212,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
  getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:40:36.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 19 23:40:36.821: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:40:36.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2823" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":280,"completed":11,"skipped":220,"failed":0}
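
This spec works entirely through the apiextensions client, so little is logged. What it verifies: on a CRD whose version enables the status subresource, GET/PUT/PATCH against the .../status endpoint touch only .status. A v1 CRD enabling that looks roughly like the following sketch (the group, kind, and open schema are invented; the run used generated names):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com       # must be <plural>.<group>
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
    listKind: WidgetList
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}                       # enables the /status sub-resource under test
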
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":280,"completed":11,"skipped":220,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:40:36.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 19 23:40:37.073: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 19 23:40:37.085: INFO: Waiting for terminating namespaces to be deleted... Feb 19 23:40:37.088: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 19 23:40:37.094: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 19 23:40:37.094: INFO: Container kube-proxy ready: true, restart count 0 Feb 19 23:40:37.094: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 19 23:40:37.094: INFO: Container weave ready: true, restart count 1 Feb 19 23:40:37.094: INFO: Container weave-npc ready: true, restart count 0 Feb 19 23:40:37.094: INFO: send-events-a776285f-247e-4957-b069-fc1f2f74a6cf from events-1873 started at 2020-02-19 23:40:14 +0000 UTC (1 container statuses recorded) Feb 19 23:40:37.094: INFO: Container p ready: true, restart count 0 Feb 19 23:40:37.094: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 19 23:40:37.109: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 19 23:40:37.109: INFO: Container kube-controller-manager ready: true, restart count 14 Feb 19 23:40:37.109: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 19 23:40:37.109: INFO: Container kube-proxy ready: true, restart count 0 Feb 19 23:40:37.109: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 19 23:40:37.109: INFO: Container weave ready: true, restart count 0 Feb 19 23:40:37.109: INFO: Container weave-npc ready: true, restart count 0 Feb 19 23:40:37.109: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 19 23:40:37.109: INFO: Container kube-scheduler ready: true, restart count 18 Feb 19 23:40:37.109: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 19 23:40:37.109: INFO: Container kube-apiserver ready: true, restart count 1 Feb 19 23:40:37.109: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 19 23:40:37.109: 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:40:51.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:41:28.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5330" for this suite.
STEP: Destroying namespace "nsdeletetest-8499" for this suite.
Feb 19 23:41:28.915: INFO: Namespace nsdeletetest-8499 was already deleted
STEP: Destroying namespace "nsdeletetest-20" for this suite.
• [SLOW TEST:37.435 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":13,"skipped":265,"failed":0}
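
Most of the 37 s here is namespace finalization: deleting a namespace cascades to everything in it, and the Namespace object itself only disappears once its contents are gone. The hand-rolled equivalent of the setup, as a sketch with invented names:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.17                  # any long-running image

Deleting the namespace then leaves it in phase Terminating while the pod is removed; only afterwards can the name be recreated, which is exactly what the "Recreating the namespace" and "Verifying there are no pods" steps check.
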
S
------------------------------
[sig-cli] Kubectl client Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:41:28.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 19 23:41:29.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3564'
Feb 19 23:41:29.200: INFO: stderr: ""
Feb 19 23:41:29.200: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868
Feb 19 23:41:29.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3564'
Feb 19 23:41:34.089: INFO: stderr: ""
Feb 19 23:41:34.089: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:41:34.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3564" for this suite.
• [SLOW TEST:5.177 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":280,"completed":14,"skipped":266,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 19 23:41:34.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 19 23:41:34.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61" in namespace "projected-4687" to be "success or failure"
Feb 19 23:41:34.473: INFO: Pod "downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61": Phase="Pending", Reason="", readiness=false. Elapsed: 58.796175ms
Feb 19 23:41:36.483: INFO: Pod "downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068887042s
Feb 19 23:41:38.499: INFO: Pod "downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084098457s
Feb 19 23:41:40.512: INFO: Pod "downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097091802s
Feb 19 23:41:42.521: INFO: Pod "downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106089313s
STEP: Saw pod success
Feb 19 23:41:42.521: INFO: Pod "downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61" satisfied condition "success or failure"
Feb 19 23:41:42.524: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61 container client-container:
STEP: delete the pod
Feb 19 23:41:42.710: INFO: Waiting for pod downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61 to disappear
Feb 19 23:41:42.731: INFO: Pod downwardapi-volume-a623d9dc-2337-4525-886e-bda096433e61 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 19 23:41:42.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4687" for this suite.
• [SLOW TEST:8.642 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":15,"skipped":314,"failed":0}
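
A downward API volume writes selected pod fields into files; "podname only" mounts just metadata.name. Since this is the Projected variant, the source sits under a projected volume. A sketch (mount path and image are illustrative; the log does not show the spec):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31                # stand-in for the e2e test image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # file content is the pod's own name
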
• [SLOW TEST:8.642 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":15,"skipped":314,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:41:42.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 19 23:41:42.954: INFO: Waiting up to 5m0s for pod "pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3" in namespace "emptydir-9292" to be "success or failure" Feb 19 23:41:43.054: INFO: Pod "pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3": Phase="Pending", Reason="", readiness=false. Elapsed: 100.272034ms Feb 19 23:41:45.061: INFO: Pod "pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106480633s Feb 19 23:41:47.066: INFO: Pod "pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111806981s Feb 19 23:41:49.072: INFO: Pod "pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117516731s Feb 19 23:41:51.079: INFO: Pod "pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.124960387s STEP: Saw pod success Feb 19 23:41:51.079: INFO: Pod "pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3" satisfied condition "success or failure" Feb 19 23:41:51.083: INFO: Trying to get logs from node jerma-node pod pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3 container test-container: STEP: delete the pod Feb 19 23:41:51.123: INFO: Waiting for pod pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3 to disappear Feb 19 23:41:51.132: INFO: Pod pod-b1e6b9f3-6b4f-4178-b8d3-87efa0de51d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:41:51.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9292" for this suite. 
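Editorial sketch: "(root,0777,default)" in the test name reads as: run as root, expect mode 0777, use the default (node-disk) emptyDir medium. A hedged standalone check, assuming placeholder names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      # Print the volume's permission bits; an emptyDir on the default
      # medium is expected to be world-writable (0777).
      command: ["sh", "-c", "stat -c '%a' /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}   # default medium; "medium: Memory" would use tmpfs instead
  EOF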
• [SLOW TEST:8.372 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":16,"skipped":316,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:41:51.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 19 23:41:51.354: INFO: Waiting up to 5m0s for pod "pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4" in namespace "emptydir-5445" to be "success or failure" Feb 19 23:41:51.428: INFO: Pod "pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4": Phase="Pending", Reason="", readiness=false. Elapsed: 73.961396ms Feb 19 23:41:53.435: INFO: Pod "pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081617712s Feb 19 23:41:55.443: INFO: Pod "pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089424679s Feb 19 23:41:57.498: INFO: Pod "pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143872771s Feb 19 23:41:59.505: INFO: Pod "pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151540303s STEP: Saw pod success Feb 19 23:41:59.505: INFO: Pod "pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4" satisfied condition "success or failure" Feb 19 23:41:59.509: INFO: Trying to get logs from node jerma-node pod pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4 container test-container: STEP: delete the pod Feb 19 23:41:59.717: INFO: Waiting for pod pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4 to disappear Feb 19 23:41:59.768: INFO: Pod pod-1cd25f77-4f0b-4f97-ac0b-2d440d5b13f4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:41:59.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5445" for this suite. 
• [SLOW TEST:8.642 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":17,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:41:59.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-e432cfb4-f0ab-422d-9777-013fb3bb9d06 STEP: Creating a pod to test consume secrets Feb 19 23:42:00.130: INFO: Waiting up to 5m0s for pod "pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd" in namespace "secrets-5286" to be "success or failure" Feb 19 23:42:00.178: INFO: Pod "pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd": Phase="Pending", Reason="", readiness=false. Elapsed: 47.620508ms Feb 19 23:42:02.186: INFO: Pod "pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056089097s Feb 19 23:42:04.195: INFO: Pod "pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064219289s Feb 19 23:42:06.206: INFO: Pod "pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075615904s Feb 19 23:42:08.213: INFO: Pod "pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082556619s STEP: Saw pod success Feb 19 23:42:08.213: INFO: Pod "pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd" satisfied condition "success or failure" Feb 19 23:42:08.217: INFO: Trying to get logs from node jerma-node pod pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd container secret-volume-test: STEP: delete the pod Feb 19 23:42:08.405: INFO: Waiting for pod pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd to disappear Feb 19 23:42:08.414: INFO: Pod pod-secrets-34cb3a20-5825-4a22-ab32-3b5185f538cd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:42:08.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5286" for this suite. 
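Editorial sketch: "consumable in multiple volumes" means the same Secret is mounted at two paths in one pod. A minimal reproduction with placeholder names:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-multi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
      volumeMounts:
      - { name: secret-volume-1, mountPath: /etc/secret-volume-1 }
      - { name: secret-volume-2, mountPath: /etc/secret-volume-2 }
    volumes:
    - name: secret-volume-1
      secret: { secretName: demo-secret }
    - name: secret-volume-2
      secret: { secretName: demo-secret }
  EOF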
• [SLOW TEST:8.639 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":18,"skipped":353,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:42:08.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 19 23:42:08.558: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:42:14.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6606" for this suite. 
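Editorial sketch: listing CustomResourceDefinition objects only requires one to be registered. A minimal apiextensions.k8s.io/v1 definition (group, kind, and plural below are placeholders):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: demos.example.com          # must be <plural>.<group>
  spec:
    group: example.com
    scope: Namespaced
    names: { plural: demos, singular: demo, kind: Demo }
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  EOF
  kubectl get customresourcedefinitions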
• [SLOW TEST:6.073 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":280,"completed":19,"skipped":373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:42:14.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-4a0631e3-7658-4f2f-9067-851524ec92d5 STEP: Creating secret with name s-test-opt-upd-c2067e15-4c3c-483f-ad9e-3ecb317873cc STEP: Creating the pod STEP: Deleting secret s-test-opt-del-4a0631e3-7658-4f2f-9067-851524ec92d5 STEP: Updating secret s-test-opt-upd-c2067e15-4c3c-483f-ad9e-3ecb317873cc STEP: Creating secret with name s-test-opt-create-c7df4e51-dd83-4c92-a910-c6e730d8baee STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:43:44.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2772" for this suite. 
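Editorial sketch: the 90-second runtime above comes from waiting for the kubelet to re-sync the volume after secrets are deleted, updated, and created. The pod marks its projected secret sources `optional`, so missing sources are tolerated. A sketch of the shape involved (names are placeholders):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-secret-demo
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: secrets
        mountPath: /etc/projected-secrets
    volumes:
    - name: secrets
      projected:
        sources:
        - secret:
            name: s-test-opt-del      # may be deleted while the pod runs
            optional: true
        - secret:
            name: s-test-opt-create   # may not exist yet when the pod starts
            optional: true
  EOF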
• [SLOW TEST:90.181 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":20,"skipped":396,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:43:44.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 19 23:43:44.797: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1249 /api/v1/namespaces/watch-1249/configmaps/e2e-watch-test-watch-closed 0573e4b2-131b-48b9-9044-ca8506c0420c 9491317 0 2020-02-19 23:43:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 19 23:43:44.797: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1249 /api/v1/namespaces/watch-1249/configmaps/e2e-watch-test-watch-closed 0573e4b2-131b-48b9-9044-ca8506c0420c 9491318 0 2020-02-19 23:43:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 19 23:43:44.813: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1249 /api/v1/namespaces/watch-1249/configmaps/e2e-watch-test-watch-closed 0573e4b2-131b-48b9-9044-ca8506c0420c 9491319 0 2020-02-19 23:43:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 19 23:43:44.813: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1249 /api/v1/namespaces/watch-1249/configmaps/e2e-watch-test-watch-closed 0573e4b2-131b-48b9-9044-ca8506c0420c 9491320 0 2020-02-19 23:43:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:43:44.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1249" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":21,"skipped":403,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:43:44.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-17bd4b79-932f-4abf-9e54-dcec72d79b0f STEP: Creating a pod to test consume configMaps Feb 19 23:43:44.938: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1" in namespace "projected-7434" to be "success or failure" Feb 19 23:43:44.944: INFO: Pod "pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.5377ms Feb 19 23:43:46.953: INFO: Pod "pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014903805s Feb 19 23:43:48.959: INFO: Pod "pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020979102s Feb 19 23:43:50.966: INFO: Pod "pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028000704s Feb 19 23:43:52.974: INFO: Pod "pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035994063s STEP: Saw pod success Feb 19 23:43:52.974: INFO: Pod "pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1" satisfied condition "success or failure" Feb 19 23:43:52.979: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1 container projected-configmap-volume-test: STEP: delete the pod Feb 19 23:43:53.331: INFO: Waiting for pod pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1 to disappear Feb 19 23:43:53.341: INFO: Pod pod-projected-configmaps-f347f952-1cbc-4290-8b56-6fb0c4e9c2b1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:43:53.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7434" for this suite. 
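Editorial sketch: "mappings and Item mode set" means individual ConfigMap keys are remapped to custom paths with an explicit per-file mode. A minimal reproduction (names and mode are illustrative):

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/config/path/to && cat /etc/config/path/to/data-1"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      projected:
        sources:
        - configMap:
            name: demo-config
            items:
            - key: data-1
              path: path/to/data-1
              mode: 0400              # octal file mode for this item
  EOF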
• [SLOW TEST:8.543 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":22,"skipped":414,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:43:53.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 19 23:43:54.402: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 19 23:43:56.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 23:43:58.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, 
loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 23:44:00.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717752634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 19 23:44:03.904: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:44:04.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2714" for this suite. STEP: Destroying namespace "webhook-2714-markers" for this suite. 
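Editorial sketch: the test lists and collection-deletes ValidatingWebhookConfigurations backed by the sample webhook deployed above. The minimal object shape looks roughly like this; the service reference and caBundle are placeholders, and a working webhook needs a real serving endpoint behind them:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: demo-validating-webhook
  webhooks:
  - name: deny-configmaps.example.com
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
    failurePolicy: Fail
    clientConfig:
      service:
        namespace: default            # placeholder; must point at a real Service
        name: e2e-test-webhook
        path: /validate
      # caBundle: <base64 PEM bundle that signs the service's serving cert>
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
  EOF
  kubectl get validatingwebhookconfigurations
  kubectl delete validatingwebhookconfiguration demo-validating-webhook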
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.418 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":23,"skipped":418,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:44:04.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 19 23:44:04.889: INFO: Waiting up to 5m0s for pod "downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650" in namespace "downward-api-9904" to be "success or failure" Feb 19 23:44:04.894: INFO: Pod "downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650": Phase="Pending", Reason="", readiness=false. Elapsed: 4.811558ms Feb 19 23:44:06.904: INFO: Pod "downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015511664s Feb 19 23:44:08.913: INFO: Pod "downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02395951s Feb 19 23:44:10.923: INFO: Pod "downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034390295s Feb 19 23:44:12.931: INFO: Pod "downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042715524s Feb 19 23:44:14.938: INFO: Pod "downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048863807s STEP: Saw pod success Feb 19 23:44:14.938: INFO: Pod "downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650" satisfied condition "success or failure" Feb 19 23:44:14.942: INFO: Trying to get logs from node jerma-node pod downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650 container dapi-container: STEP: delete the pod Feb 19 23:44:15.194: INFO: Waiting for pod downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650 to disappear Feb 19 23:44:15.202: INFO: Pod downward-api-ea01abaa-b58b-40ac-8edd-217f23a75650 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:44:15.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9904" for this suite. 
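Editorial sketch: the downward API env-var test maps pod metadata into the container environment via `fieldRef`. A minimal equivalent (names are placeholders):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env | grep ^POD_"]
      env:
      - name: POD_NAME
        valueFrom: { fieldRef: { fieldPath: metadata.name } }
      - name: POD_NAMESPACE
        valueFrom: { fieldRef: { fieldPath: metadata.namespace } }
      - name: POD_IP
        valueFrom: { fieldRef: { fieldPath: status.podIP } }
  EOF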
• [SLOW TEST:10.428 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":24,"skipped":421,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:44:15.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-896416d2-d5b9-4177-b65c-aed88bf9e356 in namespace container-probe-9001 Feb 19 23:44:21.337: INFO: Started pod liveness-896416d2-d5b9-4177-b65c-aed88bf9e356 in namespace container-probe-9001 STEP: checking the pod's current state and verifying that restartCount is present Feb 19 23:44:21.341: INFO: Initial restart count of pod liveness-896416d2-d5b9-4177-b65c-aed88bf9e356 is 0 Feb 19 23:44:41.458: INFO: Restart count of pod container-probe-9001/liveness-896416d2-d5b9-4177-b65c-aed88bf9e356 is now 1 (20.116293584s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:44:41.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9001" for this suite. 
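Editorial sketch: the restart observed ~20s in is the expected effect of an httpGet probe that starts failing. A hedged reproduction using the documented liveness test image, which serves /healthz and then deliberately fails it:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http-demo
  spec:
    containers:
    - name: liveness
      image: k8s.gcr.io/liveness    # test image that returns 500 on /healthz after ~10s
      args: ["/server"]
      livenessProbe:
        httpGet: { path: /healthz, port: 8080 }
        initialDelaySeconds: 3
        periodSeconds: 3
  EOF
  # Watch restartCount climb once the probe starts failing:
  kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'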
• [SLOW TEST:26.321 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":25,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:44:41.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:44:41.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-804" for this suite. STEP: Destroying namespace "nspatchtest-8319694b-3dc7-4b64-b345-bd3ec549234f-2028" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":26,"skipped":451,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:44:41.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 19 23:44:42.127: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:44:52.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7958" for this suite. 
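Editorial sketch: the websocket test above streams the pod log subresource. The same endpoint can be read outside the suite with a plain streaming GET, or via the usual porcelain (pod and namespace names are placeholders):

  # Raw read of the log subresource the test hits over a websocket:
  kubectl get --raw '/api/v1/namespaces/default/pods/demo-pod/log?follow=true'
  # Equivalent for most purposes:
  kubectl logs demo-pod --follow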
• [SLOW TEST:10.294 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":27,"skipped":455,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:44:52.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod test-webserver-a99ab608-e230-4c68-bad5-6fa384338a85 in namespace container-probe-8763 Feb 19 23:45:00.412: INFO: Started pod test-webserver-a99ab608-e230-4c68-bad5-6fa384338a85 in namespace container-probe-8763 STEP: checking the pod's current state and verifying that restartCount is present Feb 19 23:45:00.418: INFO: Initial restart count of pod test-webserver-a99ab608-e230-4c68-bad5-6fa384338a85 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:49:01.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8763" for this suite. 
• [SLOW TEST:249.538 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":28,"skipped":457,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:49:01.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-projected-all-test-volume-99599550-b452-4dbe-beef-fca18a6a3841 STEP: Creating secret with name secret-projected-all-test-volume-5db73a2d-f1c8-4e47-ab79-bfe9cf461bed STEP: Creating a pod to test Check all projections for projected volume plugin Feb 19 23:49:01.974: INFO: Waiting up to 5m0s for pod "projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977" in namespace "projected-8528" to be "success or failure" Feb 19 23:49:02.054: INFO: Pod "projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977": Phase="Pending", Reason="", readiness=false. Elapsed: 79.759544ms Feb 19 23:49:04.063: INFO: Pod "projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088969016s Feb 19 23:49:06.071: INFO: Pod "projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096533321s Feb 19 23:49:08.076: INFO: Pod "projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10225482s Feb 19 23:49:10.082: INFO: Pod "projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10786982s STEP: Saw pod success Feb 19 23:49:10.082: INFO: Pod "projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977" satisfied condition "success or failure" Feb 19 23:49:10.085: INFO: Trying to get logs from node jerma-node pod projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977 container projected-all-volume-test: STEP: delete the pod Feb 19 23:49:10.150: INFO: Waiting for pod projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977 to disappear Feb 19 23:49:10.155: INFO: Pod projected-volume-d9e4406e-d687-4380-8a37-8af6ba3a8977 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:49:10.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8528" for this suite. 
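Editorial sketch: "all components that make up the projection API" means one projected volume combining configMap, secret, and downwardAPI sources. A minimal reproduction with placeholder names:

  kubectl create configmap demo-config --from-literal=cm-key=cm-value
  kubectl create secret generic demo-secret --from-literal=secret-key=secret-value
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-all-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-all-volume-test
      image: busybox
      command: ["sh", "-c", "cat /all/cm-key /all/secret-key /all/podname"]
      volumeMounts:
      - name: all-in-one
        mountPath: /all
    volumes:
    - name: all-in-one
      projected:
        sources:
        - configMap: { name: demo-config }
        - secret: { name: demo-secret }
        - downwardAPI:
            items:
            - path: podname
              fieldRef: { fieldPath: metadata.name }
  EOF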
• [SLOW TEST:8.388 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":29,"skipped":464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:49:10.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 19 23:49:10.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7816' Feb 19 23:49:10.612: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 19 23:49:10.612: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Feb 19 23:49:10.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7816' Feb 19 23:49:10.817: INFO: stderr: "" Feb 19 23:49:10.817: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:49:10.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7816" for this suite. 
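Editorial note: the stderr line above shows `--generator=job/v1` was already deprecated at this version and has since been removed. A sketch of the old and current forms (job name is a placeholder):

  # v1.17-era invocation from the test (generator flag later removed):
  kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
    --image=docker.io/library/httpd:2.4.38-alpine
  # Current equivalent:
  kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
  kubectl delete job e2e-test-httpd-job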
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":280,"completed":30,"skipped":514,"failed":0} ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:49:10.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 19 23:49:10.978: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a4ee55d0-04e9-49a8-8dec-f40468464655" in namespace "security-context-test-671" to be "success or failure" Feb 19 23:49:11.097: INFO: Pod "busybox-privileged-false-a4ee55d0-04e9-49a8-8dec-f40468464655": Phase="Pending", Reason="", readiness=false. Elapsed: 118.919096ms Feb 19 23:49:13.104: INFO: Pod "busybox-privileged-false-a4ee55d0-04e9-49a8-8dec-f40468464655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125490896s Feb 19 23:49:15.113: INFO: Pod "busybox-privileged-false-a4ee55d0-04e9-49a8-8dec-f40468464655": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133996837s Feb 19 23:49:17.120: INFO: Pod "busybox-privileged-false-a4ee55d0-04e9-49a8-8dec-f40468464655": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141536194s Feb 19 23:49:19.129: INFO: Pod "busybox-privileged-false-a4ee55d0-04e9-49a8-8dec-f40468464655": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150690827s Feb 19 23:49:21.138: INFO: Pod "busybox-privileged-false-a4ee55d0-04e9-49a8-8dec-f40468464655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.159461636s Feb 19 23:49:21.138: INFO: Pod "busybox-privileged-false-a4ee55d0-04e9-49a8-8dec-f40468464655" satisfied condition "success or failure" Feb 19 23:49:21.150: INFO: Got logs for pod "busybox-privileged-false-a4ee55d0-04e9-49a8-8dec-f40468464655": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:49:21.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-671" for this suite. 
• [SLOW TEST:10.327 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":31,"skipped":514,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:49:21.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name projected-secret-test-e68d0a25-14b0-46c3-b42e-545cf14a8c68 STEP: Creating a pod to test consume secrets Feb 19 23:49:21.282: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616" in namespace "projected-9997" to be "success or failure" Feb 19 23:49:21.304: INFO: Pod "pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616": Phase="Pending", Reason="", readiness=false. Elapsed: 21.536929ms Feb 19 23:49:23.312: INFO: Pod "pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029620813s Feb 19 23:49:25.321: INFO: Pod "pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038106558s Feb 19 23:49:27.330: INFO: Pod "pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047398473s Feb 19 23:49:29.348: INFO: Pod "pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065020189s STEP: Saw pod success Feb 19 23:49:29.348: INFO: Pod "pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616" satisfied condition "success or failure" Feb 19 23:49:29.354: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616 container secret-volume-test: STEP: delete the pod Feb 19 23:49:29.552: INFO: Waiting for pod pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616 to disappear Feb 19 23:49:29.560: INFO: Pod pod-projected-secrets-02df09d4-89c0-49e9-b51c-07e97cff0616 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:49:29.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9997" for this suite. 
• [SLOW TEST:8.417 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":32,"skipped":525,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:49:29.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 19 23:49:29.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4720' Feb 19 23:49:30.134: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 19 23:49:30.134: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Feb 19 23:49:30.153: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 19 23:49:30.237: INFO: scanned /root for discovery docs: Feb 19 23:49:30.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4720' Feb 19 23:49:55.443: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 19 23:49:55.443: INFO: stdout: "Created e2e-test-httpd-rc-534c9d5619cd68d8c07ef24023a3ea32\nScaling up e2e-test-httpd-rc-534c9d5619cd68d8c07ef24023a3ea32 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-534c9d5619cd68d8c07ef24023a3ea32 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-534c9d5619cd68d8c07ef24023a3ea32 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Feb 19 23:49:55.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4720' Feb 19 23:49:55.553: INFO: stderr: "" Feb 19 23:49:55.553: INFO: stdout: "e2e-test-httpd-rc-534c9d5619cd68d8c07ef24023a3ea32-9k8gj " Feb 19 23:49:55.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-534c9d5619cd68d8c07ef24023a3ea32-9k8gj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4720' Feb 19 23:49:55.785: INFO: stderr: "" Feb 19 23:49:55.785: INFO: stdout: "true" Feb 19 23:49:55.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-534c9d5619cd68d8c07ef24023a3ea32-9k8gj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4720' Feb 19 23:49:55.928: INFO: stderr: "" Feb 19 23:49:55.928: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Feb 19 23:49:55.928: INFO: e2e-test-httpd-rc-534c9d5619cd68d8c07ef24023a3ea32-9k8gj is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700 Feb 19 23:49:55.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4720' Feb 19 23:49:56.064: INFO: stderr: "" Feb 19 23:49:56.064: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:49:56.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4720" for this suite. 
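Editorial note: `kubectl rolling-update` (flagged deprecated in the stderr above) operated on ReplicationControllers and was later removed. The modern Deployment-based equivalent, with placeholder deployment and container names:

  # Deprecated RC-based flow exercised above:
  #   kubectl rolling-update e2e-test-httpd-rc --update-period=1s --image=...
  # Deployment-based replacement:
  kubectl set image deployment/demo-deployment httpd=docker.io/library/httpd:2.4.38-alpine
  kubectl rollout status deployment/demo-deployment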
• [SLOW TEST:26.510 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":280,"completed":33,"skipped":544,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:49:56.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:50:01.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6976" for this suite. 
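Editorial sketch: the suite asserts that concurrent watchers all observe events in the same resourceVersion order. A rough informal way to observe this outside the suite (namespace is a placeholder):

  # Run two watches on the same resource in parallel; both should report
  # the identical sequence of changes.
  kubectl get configmaps --namespace=default --watch &
  kubectl get configmaps --namespace=default --watch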
• [SLOW TEST:5.367 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":34,"skipped":547,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:50:01.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 19 23:50:01.607: INFO: Waiting up to 5m0s for pod "pod-57e32591-2918-4f98-8da0-b1d1c182aa15" in namespace "emptydir-6830" to be "success or failure" Feb 19 23:50:01.663: INFO: Pod "pod-57e32591-2918-4f98-8da0-b1d1c182aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 55.696265ms Feb 19 23:50:03.672: INFO: Pod "pod-57e32591-2918-4f98-8da0-b1d1c182aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064767262s Feb 19 23:50:05.679: INFO: Pod "pod-57e32591-2918-4f98-8da0-b1d1c182aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0717998s Feb 19 23:50:07.685: INFO: Pod "pod-57e32591-2918-4f98-8da0-b1d1c182aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077799497s Feb 19 23:50:09.693: INFO: Pod "pod-57e32591-2918-4f98-8da0-b1d1c182aa15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085902735s STEP: Saw pod success Feb 19 23:50:09.694: INFO: Pod "pod-57e32591-2918-4f98-8da0-b1d1c182aa15" satisfied condition "success or failure" Feb 19 23:50:09.697: INFO: Trying to get logs from node jerma-node pod pod-57e32591-2918-4f98-8da0-b1d1c182aa15 container test-container: STEP: delete the pod Feb 19 23:50:09.744: INFO: Waiting for pod pod-57e32591-2918-4f98-8da0-b1d1c182aa15 to disappear Feb 19 23:50:09.788: INFO: Pod pod-57e32591-2918-4f98-8da0-b1d1c182aa15 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:50:09.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6830" for this suite. 
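The (non-root,0666,default) tuple in the spec name means: write as a non-root UID, expect mode 0666 on the file, on the default (node-disk) emptyDir medium. A hand-rolled equivalent of the check, with illustrative names and a stock busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" part of the tuple
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /ed/f && chmod 0666 /ed/f && stat -c '%a' /ed/f"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}               # default medium, i.e. node disk
EOF
# once the pod has completed:
kubectl logs emptydir-mode-check    # expect: 666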
• [SLOW TEST:8.352 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":35,"skipped":557,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:50:09.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-75f086b1-e9a3-4384-b72f-814af5de8c85 in namespace container-probe-8595 Feb 19 23:50:16.121: INFO: Started pod busybox-75f086b1-e9a3-4384-b72f-814af5de8c85 in namespace container-probe-8595 STEP: checking the pod's current state and verifying that restartCount is present Feb 19 23:50:16.129: INFO: Initial restart count of pod busybox-75f086b1-e9a3-4384-b72f-814af5de8c85 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:54:17.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8595" for this suite. 
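The pod above sat under observation for roughly four minutes precisely because its probe never fails; a minimal stand-in for the fixture (names and timings illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-stays-up
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    # the probe file exists for the container's whole lifetime,
    # so `cat` keeps succeeding and no restart is triggered
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount should remain 0, which is what the spec asserts:
kubectl get pod liveness-stays-up \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'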
• [SLOW TEST:247.753 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":36,"skipped":562,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:54:17.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 19 23:54:17.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857" in namespace "projected-1445" to be "success or failure" Feb 19 23:54:17.827: INFO: Pod "downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857": Phase="Pending", Reason="", readiness=false. Elapsed: 97.371511ms Feb 19 23:54:19.840: INFO: Pod "downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110111669s Feb 19 23:54:21.856: INFO: Pod "downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126803705s Feb 19 23:54:23.877: INFO: Pod "downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147673258s Feb 19 23:54:25.887: INFO: Pod "downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157840004s STEP: Saw pod success Feb 19 23:54:25.888: INFO: Pod "downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857" satisfied condition "success or failure" Feb 19 23:54:25.893: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857 container client-container: STEP: delete the pod Feb 19 23:54:25.944: INFO: Waiting for pod downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857 to disappear Feb 19 23:54:25.965: INFO: Pod downwardapi-volume-684c64c2-a0f4-4b86-b8f9-a33f864d3857 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:54:25.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1445" for this suite. 
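The projected downwardAPI volume this fixture mounts exposes the container's own memory request as a file; a sketch of the same wiring (pod name, paths, and the 32Mi figure are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # prints the request in bytes: 33554432 for 32Mi
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF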
• [SLOW TEST:8.413 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":37,"skipped":584,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:54:25.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Feb 19 23:54:26.124: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:54:46.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3851" for this suite.
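The three STEPs above map onto a single CRD edit: publish two served versions, then flip one to served: false and confirm its definitions leave the published OpenAPI while the sibling version's stay. A skeleton of such a CRD (group, kind, and schemas are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false              # flipped from true, as the test does
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF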
• [SLOW TEST:20.168 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":38,"skipped":594,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:54:46.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Feb 19 23:54:54.846: INFO: Successfully updated pod "annotationupdate5850595b-0e11-4132-83ce-aa1ede313980" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:54:56.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1515" for this suite. 
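This fixture relies on downward API volume files being refreshed after the pod's metadata changes; the update is eventual, applied on the kubelet's sync interval, which accounts for the pause visible in the log above. A hand-rolled version (names and annotation values illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# change the annotation; the mounted file is eventually rewritten
kubectl annotate pod annotationupdate-demo builder=bob --overwrite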
• [SLOW TEST:10.814 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":39,"skipped":608,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:54:56.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 19 23:55:05.345: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:55:05.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6090" for this suite. 
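Both knobs under test here are plain pod-spec fields: terminationMessagePath relocates the message file away from the default /dev/termination-log, and runAsUser makes the writer non-root. A sketch (names and UID illustrative; DONE mirrors the expected message in the log above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    securityContext:
      runAsUser: 1001                         # non-root writer
    command: ["sh", "-c", "printf DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default path
EOF
# after the pod succeeds, the message surfaces in status:
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'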
• [SLOW TEST:8.444 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":40,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:55:05.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 19 23:55:05.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-609' Feb 19 23:55:05.705: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 19 23:55:05.705: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604 Feb 19 23:55:05.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-609' Feb 19 23:55:06.058: INFO: stderr: "" Feb 19 23:55:06.059: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:55:06.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-609" for this suite. 
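The deprecation warning captured in stderr above points at the replacements for this generator; spelled out for the record:

# what --generator=deployment/apps.v1 did, stated explicitly:
kubectl create deployment e2e-test-httpd-deployment \
  --image=docker.io/library/httpd:2.4.38-alpine
# or, when only a single pod is wanted (the generator kubectl suggests):
kubectl run e2e-test-httpd-pod --generator=run-pod/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine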
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":280,"completed":41,"skipped":659,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:55:06.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:55:23.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3016" for this suite. • [SLOW TEST:17.354 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":280,"completed":42,"skipped":663,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:55:23.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating secret secrets-503/secret-test-0077eb0c-4b1d-4eab-9f65-472069776e0d STEP: Creating a pod to test consume secrets Feb 19 23:55:23.543: INFO: Waiting up to 5m0s for pod "pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7" in namespace "secrets-503" to be "success or failure" Feb 19 23:55:23.608: INFO: Pod "pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7": Phase="Pending", Reason="", readiness=false. Elapsed: 64.624968ms Feb 19 23:55:25.616: INFO: Pod "pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.073186683s Feb 19 23:55:27.626: INFO: Pod "pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082808419s Feb 19 23:55:29.635: INFO: Pod "pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092528359s Feb 19 23:55:31.644: INFO: Pod "pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101596856s STEP: Saw pod success Feb 19 23:55:31.645: INFO: Pod "pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7" satisfied condition "success or failure" Feb 19 23:55:31.651: INFO: Trying to get logs from node jerma-node pod pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7 container env-test: STEP: delete the pod Feb 19 23:55:31.722: INFO: Waiting for pod pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7 to disappear Feb 19 23:55:31.733: INFO: Pod pod-configmaps-5dd6cb14-6de9-441e-94d1-586d4991bed7 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:55:31.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-503" for this suite. • [SLOW TEST:8.335 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":43,"skipped":672,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:55:31.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-projected-f2hb STEP: Creating a pod to test atomic-volume-subpath Feb 19 23:55:31.906: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-f2hb" in namespace "subpath-1372" to be "success or failure" Feb 19 23:55:31.939: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.952867ms Feb 19 23:55:33.948: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041878115s Feb 19 23:55:35.958: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051954772s Feb 19 23:55:37.965: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.059322187s Feb 19 23:55:39.971: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 8.064873603s Feb 19 23:55:41.982: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 10.076176258s Feb 19 23:55:43.991: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 12.084816248s Feb 19 23:55:46.000: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 14.093977048s Feb 19 23:55:48.009: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 16.102717679s Feb 19 23:55:50.018: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 18.112334695s Feb 19 23:55:52.026: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 20.11974231s Feb 19 23:55:54.042: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 22.135851778s Feb 19 23:55:56.050: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 24.144325334s Feb 19 23:55:58.061: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Running", Reason="", readiness=true. Elapsed: 26.154991721s Feb 19 23:56:00.068: INFO: Pod "pod-subpath-test-projected-f2hb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.162123681s STEP: Saw pod success Feb 19 23:56:00.068: INFO: Pod "pod-subpath-test-projected-f2hb" satisfied condition "success or failure" Feb 19 23:56:00.072: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-f2hb container test-container-subpath-projected-f2hb: STEP: delete the pod Feb 19 23:56:00.149: INFO: Waiting for pod pod-subpath-test-projected-f2hb to disappear Feb 19 23:56:00.175: INFO: Pod pod-subpath-test-projected-f2hb no longer exists STEP: Deleting pod pod-subpath-test-projected-f2hb Feb 19 23:56:00.175: INFO: Deleting pod "pod-subpath-test-projected-f2hb" in namespace "subpath-1372" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:56:00.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1372" for this suite. 
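A subPath mounts a single entry of a volume rather than the whole directory; the atomic-writer suite drives it against a projected volume as above. A compact equivalent (configmap name, key, and paths are illustrative):

kubectl create configmap subpath-demo-config --from-literal=file=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.29
    command: ["sh", "-c", "cat /mnt/file"]
    volumeMounts:
    - name: proj
      mountPath: /mnt/file
      subPath: data/file       # only this entry of the volume is mounted
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: subpath-demo-config
          items:
          - key: file
            path: data/file
EOF
# once the pod has completed:
kubectl logs subpath-demo      # prints: hello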
• [SLOW TEST:28.421 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":44,"skipped":679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:56:00.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 19 23:56:01.504: INFO: Pod name wrapped-volume-race-203e8498-5ad5-4f76-b70f-297c3053bd7b: Found 0 pods out of 5 Feb 19 23:56:06.529: INFO: Pod name wrapped-volume-race-203e8498-5ad5-4f76-b70f-297c3053bd7b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-203e8498-5ad5-4f76-b70f-297c3053bd7b in namespace emptydir-wrapper-5220, will wait for the garbage collector to delete the pods Feb 19 23:56:36.675: INFO: Deleting ReplicationController wrapped-volume-race-203e8498-5ad5-4f76-b70f-297c3053bd7b took: 16.825007ms Feb 19 23:56:37.076: INFO: Terminating ReplicationController wrapped-volume-race-203e8498-5ad5-4f76-b70f-297c3053bd7b pods took: 401.164785ms STEP: Creating RC which spawns configmap-volume pods Feb 19 23:56:53.231: INFO: Pod name wrapped-volume-race-705c0ada-6f70-48a3-8339-46302184aa64: Found 0 pods out of 5 Feb 19 23:56:58.250: INFO: Pod name wrapped-volume-race-705c0ada-6f70-48a3-8339-46302184aa64: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-705c0ada-6f70-48a3-8339-46302184aa64 in namespace emptydir-wrapper-5220, will wait for the garbage collector to delete the pods Feb 19 23:57:26.365: INFO: Deleting ReplicationController wrapped-volume-race-705c0ada-6f70-48a3-8339-46302184aa64 took: 14.386247ms Feb 19 23:57:26.866: INFO: Terminating ReplicationController wrapped-volume-race-705c0ada-6f70-48a3-8339-46302184aa64 pods took: 500.757981ms STEP: Creating RC which spawns configmap-volume pods Feb 19 23:57:42.506: INFO: Pod name wrapped-volume-race-acf32cb2-0a09-4512-b90b-4066eebf1789: Found 0 pods out of 5 Feb 19 23:57:47.540: INFO: Pod name wrapped-volume-race-acf32cb2-0a09-4512-b90b-4066eebf1789: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-acf32cb2-0a09-4512-b90b-4066eebf1789 in namespace 
emptydir-wrapper-5220, will wait for the garbage collector to delete the pods Feb 19 23:58:21.697: INFO: Deleting ReplicationController wrapped-volume-race-acf32cb2-0a09-4512-b90b-4066eebf1789 took: 66.005763ms Feb 19 23:58:22.098: INFO: Terminating ReplicationController wrapped-volume-race-acf32cb2-0a09-4512-b90b-4066eebf1789 pods took: 401.055732ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:58:44.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5220" for this suite. • [SLOW TEST:164.095 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":45,"skipped":739,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:58:44.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384 STEP: creating the pod Feb 19 23:58:44.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3974' Feb 19 23:58:44.827: INFO: stderr: "" Feb 19 23:58:44.827: INFO: stdout: "pod/pause created\n" Feb 19 23:58:44.827: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 19 23:58:44.828: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3974" to be "running and ready" Feb 19 23:58:44.845: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 17.166547ms Feb 19 23:58:46.860: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031690996s Feb 19 23:58:48.878: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050296357s Feb 19 23:58:50.905: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.077542808s Feb 19 23:58:50.906: INFO: Pod "pause" satisfied condition "running and ready" Feb 19 23:58:50.906: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: adding the label testing-label with value testing-label-value to a pod Feb 19 23:58:50.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3974' Feb 19 23:58:51.050: INFO: stderr: "" Feb 19 23:58:51.050: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 19 23:58:51.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3974' Feb 19 23:58:51.185: INFO: stderr: "" Feb 19 23:58:51.185: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 19 23:58:51.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3974' Feb 19 23:58:51.325: INFO: stderr: "" Feb 19 23:58:51.325: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 19 23:58:51.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3974' Feb 19 23:58:51.430: INFO: stderr: "" Feb 19 23:58:51.431: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 STEP: using delete to clean up resources Feb 19 23:58:51.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3974' Feb 19 23:58:51.581: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 19 23:58:51.581: INFO: stdout: "pod \"pause\" force deleted\n" Feb 19 23:58:51.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3974' Feb 19 23:58:51.768: INFO: stderr: "No resources found in kubectl-3974 namespace.\n" Feb 19 23:58:51.768: INFO: stdout: "" Feb 19 23:58:51.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3974 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 19 23:58:52.015: INFO: stderr: "" Feb 19 23:58:52.015: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:58:52.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3974" for this suite. 
• [SLOW TEST:7.899 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":280,"completed":46,"skipped":740,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:58:52.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override command Feb 19 23:58:52.624: INFO: Waiting up to 5m0s for pod "client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831" in namespace "containers-4022" to be "success or failure" Feb 19 23:58:52.631: INFO: Pod "client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537732ms Feb 19 23:58:54.639: INFO: Pod "client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014354284s Feb 19 23:58:56.662: INFO: Pod "client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037004921s Feb 19 23:58:58.671: INFO: Pod "client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046383166s Feb 19 23:59:00.676: INFO: Pod "client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051335153s STEP: Saw pod success Feb 19 23:59:00.676: INFO: Pod "client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831" satisfied condition "success or failure" Feb 19 23:59:00.692: INFO: Trying to get logs from node jerma-node pod client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831 container test-container: STEP: delete the pod Feb 19 23:59:00.889: INFO: Waiting for pod client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831 to disappear Feb 19 23:59:02.624: INFO: Pod client-containers-8f587205-8279-42fd-bdfc-6c2679d5b831 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:59:02.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4022" for this suite. 
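In a pod spec, command replaces the image's ENTRYPOINT and args replaces its CMD, which is all this spec verifies; a minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]                  # replaces the image's ENTRYPOINT
    args: ["override", "arguments"]         # replaces the image's CMD
EOF
# once the pod has completed:
kubectl logs entrypoint-override            # prints: override arguments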
• [SLOW TEST:12.051 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":47,"skipped":750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:59:04.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 19 23:59:05.265: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 19 23:59:07.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 23:59:09.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 23:59:11.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717753545, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 19 23:59:14.377: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Feb 19 23:59:20.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5456 to-be-attached-pod -i -c=container1' Feb 19 23:59:20.614: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:59:20.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5456" for this suite. STEP: Destroying namespace "webhook-5456-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:16.997 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":48,"skipped":775,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:59:21.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Feb 19 23:59:21.305: INFO: >>> kubeConfig: /root/.kube/config Feb 19 23:59:24.991: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:59:40.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6329" for this suite. 
• [SLOW TEST:19.308 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":49,"skipped":796,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:59:40.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-map-d2481095-7d92-435e-ba6c-ba0ccf87e24c STEP: Creating a pod to test consume secrets Feb 19 23:59:40.949: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a" in namespace "projected-3784" to be "success or failure" Feb 19 23:59:40.960: INFO: Pod "pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.759559ms Feb 19 23:59:42.973: INFO: Pod "pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023474901s Feb 19 23:59:44.981: INFO: Pod "pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03174441s Feb 19 23:59:46.993: INFO: Pod "pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043151978s Feb 19 23:59:49.005: INFO: Pod "pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05540376s STEP: Saw pod success Feb 19 23:59:49.005: INFO: Pod "pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a" satisfied condition "success or failure" Feb 19 23:59:49.011: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a container projected-secret-volume-test: STEP: delete the pod Feb 19 23:59:49.181: INFO: Waiting for pod pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a to disappear Feb 19 23:59:49.198: INFO: Pod pod-projected-secrets-3106d199-58c8-4ba1-89ed-fa06c9afea5a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:59:49.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3784" for this suite. 
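The "mappings" in this spec's name are the items list of the projected secret source, which renames keys on disk; a hand-rolled equivalent (secret name, key, and paths illustrative):

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1    # the "mapping": key renamed on disk
EOF
# once the pod has completed:
kubectl logs pod-projected-secret-demo    # prints: value-1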
• [SLOW TEST:8.710 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":50,"skipped":801,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:59:49.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 19 23:59:56.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3771" for this suite. • [SLOW TEST:7.148 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":280,"completed":51,"skipped":803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 19 23:59:56.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-7271ace8-9313-483a-85e1-7800fcdf1f4a STEP: Creating a pod to test consume configMaps Feb 19 23:59:56.569: INFO: Waiting up to 5m0s for pod "pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c" in namespace "configmap-8231" to be "success or failure" Feb 19 23:59:56.580: INFO: Pod "pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.129534ms Feb 19 23:59:58.589: INFO: Pod "pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019899061s Feb 20 00:00:00.642: INFO: Pod "pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072688723s Feb 20 00:00:02.671: INFO: Pod "pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101994072s Feb 20 00:00:04.676: INFO: Pod "pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106975594s STEP: Saw pod success Feb 20 00:00:04.677: INFO: Pod "pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c" satisfied condition "success or failure" Feb 20 00:00:04.680: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c container configmap-volume-test: STEP: delete the pod Feb 20 00:00:04.713: INFO: Waiting for pod pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c to disappear Feb 20 00:00:04.749: INFO: Pod pod-configmaps-b07515eb-d8d2-4447-837a-191f609afc8c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:00:04.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8231" for this suite. 
• [SLOW TEST:8.351 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":52,"skipped":855,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:00:04.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 20 00:00:04.906: INFO: Waiting up to 5m0s for pod "pod-68575574-21bd-4193-8daa-29cee7a1c669" in namespace "emptydir-1135" to be "success or failure" Feb 20 00:00:04.918: INFO: Pod "pod-68575574-21bd-4193-8daa-29cee7a1c669": Phase="Pending", Reason="", readiness=false. Elapsed: 12.439474ms Feb 20 00:00:07.101: INFO: Pod "pod-68575574-21bd-4193-8daa-29cee7a1c669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194776074s Feb 20 00:00:09.106: INFO: Pod "pod-68575574-21bd-4193-8daa-29cee7a1c669": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200089038s Feb 20 00:00:11.117: INFO: Pod "pod-68575574-21bd-4193-8daa-29cee7a1c669": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21165832s Feb 20 00:00:13.129: INFO: Pod "pod-68575574-21bd-4193-8daa-29cee7a1c669": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222862713s Feb 20 00:00:15.135: INFO: Pod "pod-68575574-21bd-4193-8daa-29cee7a1c669": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.229645411s STEP: Saw pod success Feb 20 00:00:15.136: INFO: Pod "pod-68575574-21bd-4193-8daa-29cee7a1c669" satisfied condition "success or failure" Feb 20 00:00:15.139: INFO: Trying to get logs from node jerma-node pod pod-68575574-21bd-4193-8daa-29cee7a1c669 container test-container: STEP: delete the pod Feb 20 00:00:15.176: INFO: Waiting for pod pod-68575574-21bd-4193-8daa-29cee7a1c669 to disappear Feb 20 00:00:15.200: INFO: Pod pod-68575574-21bd-4193-8daa-29cee7a1c669 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:00:15.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1135" for this suite. 
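The triple (root,0644,tmpfs) in the test name decodes to: run as root, expect mode 0644 on the test file, and back the emptyDir with memory rather than node-local disk. The knob for the last part is emptyDir.medium; a minimal hand-rolled equivalent, with an illustrative pod name and command:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # write a file with mode 0644 and show that the volume really is tmpfs
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs; leave medium unset for the node's default storage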
• [SLOW TEST:10.469 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":53,"skipped":863,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:00:15.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 20 00:00:15.298: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:00:25.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1359" for this suite. 
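The contract pinned down by the init-container test: with restartPolicy: Never, a failing init container is terminal, the pod phase goes to Failed, and the app containers are never started. A minimal sketch, not the suite's actual pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail      # hypothetical
spec:
  restartPolicy: Never     # init failure is not retried; the whole pod fails
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 1"]   # always fails
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo should never run"]

kubectl get pod on such a pod reports a status of Init:Error, the phase stays Failed, and nothing restarts, which is what the test asserts.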
• [SLOW TEST:9.955 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":54,"skipped":864,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:00:25.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:00:33.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2438" for this suite. • [SLOW TEST:8.173 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":55,"skipped":896,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:00:33.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 20 00:00:33.471: INFO: Waiting up to 5m0s for pod "pod-e286a3b6-b966-4800-9933-5d6d27b45ae2" in namespace "emptydir-4597" to be "success or failure" Feb 20 00:00:33.485: INFO: Pod "pod-e286a3b6-b966-4800-9933-5d6d27b45ae2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.55926ms Feb 20 00:00:35.499: INFO: Pod "pod-e286a3b6-b966-4800-9933-5d6d27b45ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028188915s Feb 20 00:00:37.519: INFO: Pod "pod-e286a3b6-b966-4800-9933-5d6d27b45ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047752455s Feb 20 00:00:39.526: INFO: Pod "pod-e286a3b6-b966-4800-9933-5d6d27b45ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054688734s Feb 20 00:00:41.542: INFO: Pod "pod-e286a3b6-b966-4800-9933-5d6d27b45ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070918776s Feb 20 00:00:43.549: INFO: Pod "pod-e286a3b6-b966-4800-9933-5d6d27b45ae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07777422s STEP: Saw pod success Feb 20 00:00:43.549: INFO: Pod "pod-e286a3b6-b966-4800-9933-5d6d27b45ae2" satisfied condition "success or failure" Feb 20 00:00:43.552: INFO: Trying to get logs from node jerma-node pod pod-e286a3b6-b966-4800-9933-5d6d27b45ae2 container test-container: STEP: delete the pod Feb 20 00:00:43.607: INFO: Waiting for pod pod-e286a3b6-b966-4800-9933-5d6d27b45ae2 to disappear Feb 20 00:00:43.617: INFO: Pod pod-e286a3b6-b966-4800-9933-5d6d27b45ae2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:00:43.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4597" for this suite. • [SLOW TEST:10.297 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":56,"skipped":911,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:00:43.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3684 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3684 I0220 00:00:44.021141 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3684, replica count: 2 I0220 00:00:47.073181 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 00:00:50.073691 9 runners.go:189] 
externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 00:00:53.074348 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 00:00:56.075440 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 20 00:00:56.075: INFO: Creating new exec pod Feb 20 00:01:05.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3684 execpodpqc5g -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 20 00:01:07.836: INFO: stderr: "I0220 00:01:07.603157 521 log.go:172] (0xc0000f5810) (0xc0008d40a0) Create stream\nI0220 00:01:07.603276 521 log.go:172] (0xc0000f5810) (0xc0008d40a0) Stream added, broadcasting: 1\nI0220 00:01:07.618168 521 log.go:172] (0xc0000f5810) Reply frame received for 1\nI0220 00:01:07.618339 521 log.go:172] (0xc0000f5810) (0xc00069bea0) Create stream\nI0220 00:01:07.618372 521 log.go:172] (0xc0000f5810) (0xc00069bea0) Stream added, broadcasting: 3\nI0220 00:01:07.622165 521 log.go:172] (0xc0000f5810) Reply frame received for 3\nI0220 00:01:07.622248 521 log.go:172] (0xc0000f5810) (0xc0006ea000) Create stream\nI0220 00:01:07.622263 521 log.go:172] (0xc0000f5810) (0xc0006ea000) Stream added, broadcasting: 5\nI0220 00:01:07.625220 521 log.go:172] (0xc0000f5810) Reply frame received for 5\nI0220 00:01:07.710368 521 log.go:172] (0xc0000f5810) Data frame received for 5\nI0220 00:01:07.710895 521 log.go:172] (0xc0006ea000) (5) Data frame handling\nI0220 00:01:07.711002 521 log.go:172] (0xc0006ea000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0220 00:01:07.723511 521 log.go:172] (0xc0000f5810) Data frame received for 5\nI0220 00:01:07.723588 521 log.go:172] (0xc0006ea000) (5) Data frame handling\nI0220 00:01:07.723611 521 log.go:172] (0xc0006ea000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0220 00:01:07.814693 521 log.go:172] (0xc0000f5810) Data frame received for 1\nI0220 00:01:07.814932 521 log.go:172] (0xc0000f5810) (0xc00069bea0) Stream removed, broadcasting: 3\nI0220 00:01:07.815201 521 log.go:172] (0xc0008d40a0) (1) Data frame handling\nI0220 00:01:07.815240 521 log.go:172] (0xc0008d40a0) (1) Data frame sent\nI0220 00:01:07.815306 521 log.go:172] (0xc0000f5810) (0xc0006ea000) Stream removed, broadcasting: 5\nI0220 00:01:07.815364 521 log.go:172] (0xc0000f5810) (0xc0008d40a0) Stream removed, broadcasting: 1\nI0220 00:01:07.815385 521 log.go:172] (0xc0000f5810) Go away received\nI0220 00:01:07.817881 521 log.go:172] (0xc0000f5810) (0xc0008d40a0) Stream removed, broadcasting: 1\nI0220 00:01:07.818188 521 log.go:172] (0xc0000f5810) (0xc00069bea0) Stream removed, broadcasting: 3\nI0220 00:01:07.818247 521 log.go:172] (0xc0000f5810) (0xc0006ea000) Stream removed, broadcasting: 5\n" Feb 20 00:01:07.836: INFO: stdout: "" Feb 20 00:01:07.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3684 execpodpqc5g -- /bin/sh -x -c nc -zv -t -w 2 10.96.32.148 80' Feb 20 00:01:08.200: INFO: stderr: "I0220 00:01:08.008278 553 log.go:172] (0xc000b111e0) (0xc0009e0640) Create stream\nI0220 00:01:08.008445 553 log.go:172] (0xc000b111e0) (0xc0009e0640) Stream added, broadcasting: 1\nI0220 00:01:08.015190 553 log.go:172] (0xc000b111e0) Reply frame 
received for 1\nI0220 00:01:08.015254 553 log.go:172] (0xc000b111e0) (0xc00058c820) Create stream\nI0220 00:01:08.015291 553 log.go:172] (0xc000b111e0) (0xc00058c820) Stream added, broadcasting: 3\nI0220 00:01:08.017170 553 log.go:172] (0xc000b111e0) Reply frame received for 3\nI0220 00:01:08.017308 553 log.go:172] (0xc000b111e0) (0xc0006c9c20) Create stream\nI0220 00:01:08.017332 553 log.go:172] (0xc000b111e0) (0xc0006c9c20) Stream added, broadcasting: 5\nI0220 00:01:08.019098 553 log.go:172] (0xc000b111e0) Reply frame received for 5\nI0220 00:01:08.108583 553 log.go:172] (0xc000b111e0) Data frame received for 5\nI0220 00:01:08.108629 553 log.go:172] (0xc0006c9c20) (5) Data frame handling\nI0220 00:01:08.108655 553 log.go:172] (0xc0006c9c20) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.32.148 80\nI0220 00:01:08.111349 553 log.go:172] (0xc000b111e0) Data frame received for 5\nI0220 00:01:08.111376 553 log.go:172] (0xc0006c9c20) (5) Data frame handling\nI0220 00:01:08.111393 553 log.go:172] (0xc0006c9c20) (5) Data frame sent\nConnection to 10.96.32.148 80 port [tcp/http] succeeded!\nI0220 00:01:08.179292 553 log.go:172] (0xc000b111e0) (0xc00058c820) Stream removed, broadcasting: 3\nI0220 00:01:08.179564 553 log.go:172] (0xc000b111e0) Data frame received for 1\nI0220 00:01:08.181686 553 log.go:172] (0xc000b111e0) (0xc0006c9c20) Stream removed, broadcasting: 5\nI0220 00:01:08.184268 553 log.go:172] (0xc0009e0640) (1) Data frame handling\nI0220 00:01:08.185810 553 log.go:172] (0xc0009e0640) (1) Data frame sent\nI0220 00:01:08.185979 553 log.go:172] (0xc000b111e0) (0xc0009e0640) Stream removed, broadcasting: 1\nI0220 00:01:08.186049 553 log.go:172] (0xc000b111e0) Go away received\nI0220 00:01:08.190065 553 log.go:172] (0xc000b111e0) (0xc0009e0640) Stream removed, broadcasting: 1\nI0220 00:01:08.190318 553 log.go:172] (0xc000b111e0) (0xc00058c820) Stream removed, broadcasting: 3\nI0220 00:01:08.190360 553 log.go:172] (0xc000b111e0) (0xc0006c9c20) Stream removed, broadcasting: 5\n" Feb 20 00:01:08.200: INFO: stdout: "" Feb 20 00:01:08.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3684 execpodpqc5g -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32735' Feb 20 00:01:08.588: INFO: stderr: "I0220 00:01:08.394663 576 log.go:172] (0xc0007c29a0) (0xc0007b81e0) Create stream\nI0220 00:01:08.394789 576 log.go:172] (0xc0007c29a0) (0xc0007b81e0) Stream added, broadcasting: 1\nI0220 00:01:08.398367 576 log.go:172] (0xc0007c29a0) Reply frame received for 1\nI0220 00:01:08.398420 576 log.go:172] (0xc0007c29a0) (0xc000636d20) Create stream\nI0220 00:01:08.398430 576 log.go:172] (0xc0007c29a0) (0xc000636d20) Stream added, broadcasting: 3\nI0220 00:01:08.399449 576 log.go:172] (0xc0007c29a0) Reply frame received for 3\nI0220 00:01:08.399487 576 log.go:172] (0xc0007c29a0) (0xc0005d7c20) Create stream\nI0220 00:01:08.399525 576 log.go:172] (0xc0007c29a0) (0xc0005d7c20) Stream added, broadcasting: 5\nI0220 00:01:08.400621 576 log.go:172] (0xc0007c29a0) Reply frame received for 5\nI0220 00:01:08.471615 576 log.go:172] (0xc0007c29a0) Data frame received for 5\nI0220 00:01:08.471780 576 log.go:172] (0xc0005d7c20) (5) Data frame handling\nI0220 00:01:08.471850 576 log.go:172] (0xc0005d7c20) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32735\nI0220 00:01:08.474902 576 log.go:172] (0xc0007c29a0) Data frame received for 5\nI0220 00:01:08.474928 576 log.go:172] (0xc0005d7c20) (5) Data frame handling\nI0220 00:01:08.474948 576 log.go:172] (0xc0005d7c20) (5) Data frame 
sent\nConnection to 10.96.2.250 32735 port [tcp/32735] succeeded!\nI0220 00:01:08.567123 576 log.go:172] (0xc0007c29a0) Data frame received for 1\nI0220 00:01:08.567622 576 log.go:172] (0xc0007c29a0) (0xc000636d20) Stream removed, broadcasting: 3\nI0220 00:01:08.567754 576 log.go:172] (0xc0007b81e0) (1) Data frame handling\nI0220 00:01:08.567777 576 log.go:172] (0xc0007b81e0) (1) Data frame sent\nI0220 00:01:08.567936 576 log.go:172] (0xc0007c29a0) (0xc0005d7c20) Stream removed, broadcasting: 5\nI0220 00:01:08.567995 576 log.go:172] (0xc0007c29a0) (0xc0007b81e0) Stream removed, broadcasting: 1\nI0220 00:01:08.568015 576 log.go:172] (0xc0007c29a0) Go away received\nI0220 00:01:08.569414 576 log.go:172] (0xc0007c29a0) (0xc0007b81e0) Stream removed, broadcasting: 1\nI0220 00:01:08.569432 576 log.go:172] (0xc0007c29a0) (0xc000636d20) Stream removed, broadcasting: 3\nI0220 00:01:08.569442 576 log.go:172] (0xc0007c29a0) (0xc0005d7c20) Stream removed, broadcasting: 5\n" Feb 20 00:01:08.588: INFO: stdout: "" Feb 20 00:01:08.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3684 execpodpqc5g -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32735' Feb 20 00:01:09.004: INFO: stderr: "I0220 00:01:08.731184 599 log.go:172] (0xc000bd4e70) (0xc000b801e0) Create stream\nI0220 00:01:08.731344 599 log.go:172] (0xc000bd4e70) (0xc000b801e0) Stream added, broadcasting: 1\nI0220 00:01:08.736407 599 log.go:172] (0xc000bd4e70) Reply frame received for 1\nI0220 00:01:08.736451 599 log.go:172] (0xc000bd4e70) (0xc000b12320) Create stream\nI0220 00:01:08.736463 599 log.go:172] (0xc000bd4e70) (0xc000b12320) Stream added, broadcasting: 3\nI0220 00:01:08.737412 599 log.go:172] (0xc000bd4e70) Reply frame received for 3\nI0220 00:01:08.737458 599 log.go:172] (0xc000bd4e70) (0xc000a945a0) Create stream\nI0220 00:01:08.737468 599 log.go:172] (0xc000bd4e70) (0xc000a945a0) Stream added, broadcasting: 5\nI0220 00:01:08.739291 599 log.go:172] (0xc000bd4e70) Reply frame received for 5\nI0220 00:01:08.834702 599 log.go:172] (0xc000bd4e70) Data frame received for 5\nI0220 00:01:08.834768 599 log.go:172] (0xc000a945a0) (5) Data frame handling\nI0220 00:01:08.834802 599 log.go:172] (0xc000a945a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32735\nI0220 00:01:08.840376 599 log.go:172] (0xc000bd4e70) Data frame received for 5\nI0220 00:01:08.840552 599 log.go:172] (0xc000a945a0) (5) Data frame handling\nI0220 00:01:08.840583 599 log.go:172] (0xc000a945a0) (5) Data frame sent\nConnection to 10.96.1.234 32735 port [tcp/32735] succeeded!\nI0220 00:01:08.985608 599 log.go:172] (0xc000bd4e70) Data frame received for 1\nI0220 00:01:08.985784 599 log.go:172] (0xc000b801e0) (1) Data frame handling\nI0220 00:01:08.985828 599 log.go:172] (0xc000b801e0) (1) Data frame sent\nI0220 00:01:08.986050 599 log.go:172] (0xc000bd4e70) (0xc000b801e0) Stream removed, broadcasting: 1\nI0220 00:01:08.986612 599 log.go:172] (0xc000bd4e70) (0xc000a945a0) Stream removed, broadcasting: 5\nI0220 00:01:08.987243 599 log.go:172] (0xc000bd4e70) (0xc000b12320) Stream removed, broadcasting: 3\nI0220 00:01:08.987328 599 log.go:172] (0xc000bd4e70) Go away received\nI0220 00:01:08.987602 599 log.go:172] (0xc000bd4e70) (0xc000b801e0) Stream removed, broadcasting: 1\nI0220 00:01:08.987651 599 log.go:172] (0xc000bd4e70) (0xc000b12320) Stream removed, broadcasting: 3\nI0220 00:01:08.987662 599 log.go:172] (0xc000bd4e70) (0xc000a945a0) Stream removed, broadcasting: 5\n" Feb 20 00:01:09.005: INFO: stdout: "" Feb 20 
00:01:09.005: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:01:09.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3684" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.546 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":57,"skipped":916,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:01:09.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-9456b9f9-3895-4c14-8657-551dce0d8ace STEP: Creating a pod to test consume secrets Feb 20 00:01:09.308: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434" in namespace "projected-6172" to be "success or failure" Feb 20 00:01:09.343: INFO: Pod "pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434": Phase="Pending", Reason="", readiness=false. Elapsed: 35.189077ms Feb 20 00:01:11.353: INFO: Pod "pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045091182s Feb 20 00:01:13.367: INFO: Pod "pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058869194s Feb 20 00:01:15.385: INFO: Pod "pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077041139s Feb 20 00:01:18.787: INFO: Pod "pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434": Phase="Pending", Reason="", readiness=false. Elapsed: 9.47842927s Feb 20 00:01:20.820: INFO: Pod "pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434": Phase="Pending", Reason="", readiness=false. Elapsed: 11.512180384s Feb 20 00:01:22.833: INFO: Pod "pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.524534573s STEP: Saw pod success Feb 20 00:01:22.833: INFO: Pod "pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434" satisfied condition "success or failure" Feb 20 00:01:22.837: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434 container projected-secret-volume-test: STEP: delete the pod Feb 20 00:01:23.004: INFO: Waiting for pod pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434 to disappear Feb 20 00:01:23.012: INFO: Pod pod-projected-secrets-927224ac-6a4b-4445-9acf-b364d37f5434 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:01:23.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6172" for this suite. • [SLOW TEST:13.833 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":58,"skipped":917,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:01:23.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 20 00:01:23.205: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Feb 20 00:01:25.294: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:01:26.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5905" for this suite. 
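The failure condition being surfaced is status.conditions[type=ReplicaFailure] on the ReplicationController, which the controller sets when quota rejects pod creation. Roughly what the test builds; the replica count is an assumption, since the log only says the RC asks for more than the quota of two pods:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3   # assumed; anything above the quota triggers the condition
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine

Until the RC is scaled back to 2, kubectl get rc condition-test -o jsonpath='{.status.conditions}' shows ReplicaFailure=True with reason FailedCreate; after the scale-down the condition is cleared, which is exactly the pair of checks in the log.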
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":59,"skipped":925,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:01:26.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 20 00:01:27.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4393' Feb 20 00:01:27.374: INFO: stderr: "" Feb 20 00:01:27.374: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Feb 20 00:01:42.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4393 -o json' Feb 20 00:01:42.546: INFO: stderr: "" Feb 20 00:01:42.546: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-20T00:01:27Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-4393\",\n \"resourceVersion\": \"9495827\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4393/pods/e2e-test-httpd-pod\",\n \"uid\": \"b5b1f9b8-cf28-4a09-8e27-37faa3c5c8d0\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-s7c66\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-s7c66\",\n \"secret\": 
{\n \"defaultMode\": 420,\n \"secretName\": \"default-token-s7c66\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-20T00:01:27Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-20T00:01:39Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-20T00:01:39Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-20T00:01:27Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://357018425ce0ca5416b520e33c9cef644ecb01fc9a11b64675e4ea4f8caa312e\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-20T00:01:37Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.250\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.2\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-20T00:01:27Z\"\n }\n}\n" STEP: replace the image in the pod Feb 20 00:01:42.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4393' Feb 20 00:01:42.928: INFO: stderr: "" Feb 20 00:01:42.929: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904 Feb 20 00:01:42.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4393' Feb 20 00:01:49.525: INFO: stderr: "" Feb 20 00:01:49.525: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:01:49.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4393" for this suite. 
• [SLOW TEST:23.162 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":280,"completed":60,"skipped":942,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:01:49.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1859 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Feb 20 00:01:49.886: INFO: Found 0 stateful pods, waiting for 3 Feb 20 00:01:59.900: INFO: Found 2 stateful pods, waiting for 3 Feb 20 00:02:09.899: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 00:02:09.899: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 00:02:09.899: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 20 00:02:19.896: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 00:02:19.897: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 00:02:19.897: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Feb 20 00:02:19.927: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 20 00:02:29.988: INFO: Updating stateful set ss2 Feb 20 00:02:30.029: INFO: Waiting for Pod statefulset-1859/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 20 00:02:40.049: INFO: Waiting for Pod statefulset-1859/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Feb 20 00:02:50.358: INFO: Found 2 stateful pods, waiting for 3 Feb 20 00:03:00.371: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently 
Running - Ready=true Feb 20 00:03:00.371: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 00:03:00.371: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 20 00:03:10.369: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 00:03:10.369: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 00:03:10.369: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 20 00:03:10.402: INFO: Updating stateful set ss2 Feb 20 00:03:10.483: INFO: Waiting for Pod statefulset-1859/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 20 00:03:21.121: INFO: Updating stateful set ss2 Feb 20 00:03:21.156: INFO: Waiting for StatefulSet statefulset-1859/ss2 to complete update Feb 20 00:03:21.156: INFO: Waiting for Pod statefulset-1859/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 20 00:03:31.170: INFO: Waiting for StatefulSet statefulset-1859/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 20 00:03:41.173: INFO: Deleting all statefulset in ns statefulset-1859 Feb 20 00:03:41.177: INFO: Scaling statefulset ss2 to 0 Feb 20 00:04:21.233: INFO: Waiting for statefulset status.replicas updated to 0 Feb 20 00:04:21.239: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:04:21.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1859" for this suite. 
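All three phases of this test hang off a single field, spec.updateStrategy.rollingUpdate.partition: pods with an ordinal greater than or equal to the partition are updated to the new template, while lower ordinals keep the old revision. A sketch of the mechanism; the manifest shape is illustrative, since the suite builds ss2 programmatically:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 3   # partition above the highest ordinal: template changes update nothing
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine

Lowering the partition to 2, e.g. with kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}', canaries the new image onto ss2-2 only; stepping it down to 0 phases the rollout through ss2-1 and then ss2-0, the exact sequence logged above.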
• [SLOW TEST:151.736 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":61,"skipped":951,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:04:21.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-881048bf-1728-496e-b23b-7c7a13675f1a STEP: Creating secret with name s-test-opt-upd-f4939b93-16a4-489e-91e1-ddd99488067c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-881048bf-1728-496e-b23b-7c7a13675f1a STEP: Updating secret s-test-opt-upd-f4939b93-16a4-489e-91e1-ddd99488067c STEP: Creating secret with name s-test-opt-create-3ddbc481-a42e-43b0-bae7-eef83a70726b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:04:38.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4316" for this suite. 
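"Optional" here is the secret volume's optional: true flag: the pod is allowed to start while the referenced secret is absent, and the kubelet projects keys in, updates them, or removes them as the secrets change. That is why the test can delete s-test-opt-del and create s-test-opt-create after the pod is already running. One of the three volumes the test wires up, sketched with shortened names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional   # hypothetical
spec:
  containers:
  - name: creates-volume-test
    image: docker.io/library/busybox:1.29
    # poll the mount so updates become visible in the container's output
    command: ["sh", "-c", "while true; do cat /etc/secret-volumes/create/data-1 2>/dev/null; sleep 2; done"]
    volumeMounts:
    - name: create
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: create
    secret:
      secretName: s-test-opt-create   # did not exist when the pod started
      optional: true                  # without this, the pod would stay Pending until the secret appears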
• [SLOW TEST:16.758 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":62,"skipped":968,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:04:38.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with configMap that has name projected-configmap-test-upd-f1f6018d-b288-4c6e-b144-7959aaf39a80 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-f1f6018d-b288-4c6e-b144-7959aaf39a80 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:04:50.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-435" for this suite. • [SLOW TEST:12.318 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":63,"skipped":974,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:04:50.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 20 00:04:50.531: INFO: Number of nodes with available pods: 0 Feb 20 00:04:50.531: INFO: Node jerma-node is running more than one daemon pod Feb 20 00:04:51.546: INFO: Number of nodes with available pods: 0 Feb 20 00:04:51.546: INFO: Node jerma-node is running more than one daemon pod Feb 20 00:04:52.866: INFO: Number of nodes with available pods: 0 Feb 20 00:04:52.867: INFO: Node jerma-node is running more than one daemon pod Feb 20 00:04:53.557: INFO: Number of nodes with available pods: 0 Feb 20 00:04:53.558: INFO: Node jerma-node is running more than one daemon pod Feb 20 00:04:54.564: INFO: Number of nodes with available pods: 0 Feb 20 00:04:54.564: INFO: Node jerma-node is running more than one daemon pod Feb 20 00:04:55.598: INFO: Number of nodes with available pods: 0 Feb 20 00:04:55.598: INFO: Node jerma-node is running more than one daemon pod Feb 20 00:04:57.348: INFO: Number of nodes with available pods: 0 Feb 20 00:04:57.349: INFO: Node jerma-node is running more than one daemon pod Feb 20 00:04:58.041: INFO: Number of nodes with available pods: 0 Feb 20 00:04:58.041: INFO: Node jerma-node is running more than one daemon pod Feb 20 00:04:58.669: INFO: Number of nodes with available pods: 0 Feb 20 00:04:58.669: INFO: Node jerma-node is running more than one daemon pod Feb 20 00:04:59.555: INFO: Number of nodes with available pods: 1 Feb 20 00:04:59.555: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 20 00:05:00.546: INFO: Number of nodes with available pods: 1 Feb 20 00:05:00.547: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 20 00:05:01.548: INFO: Number of nodes with available pods: 2 Feb 20 00:05:01.549: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Feb 20 00:05:01.613: INFO: Number of nodes with available pods: 2 Feb 20 00:05:01.613: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1082, will wait for the garbage collector to delete the pods Feb 20 00:05:02.831: INFO: Deleting DaemonSet.extensions daemon-set took: 13.536709ms Feb 20 00:05:03.232: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.715786ms Feb 20 00:05:13.240: INFO: Number of nodes with available pods: 0 Feb 20 00:05:13.240: INFO: Number of running nodes: 0, number of available pods: 0 Feb 20 00:05:13.251: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1082/daemonsets","resourceVersion":"9496742"},"items":null} Feb 20 00:05:13.256: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1082/pods","resourceVersion":"9496742"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:05:13.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1082" for this suite. 
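The revival check leans on the DaemonSet controller reconciling toward one healthy pod per eligible node: the test forcibly marks a daemon pod Failed and waits for the controller to delete and replace it. A minimal DaemonSet of the shape used; the label key and image are assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine

Deleting one of its pods by hand (kubectl delete pod -l daemonset-name=daemon-set) triggers the same observable recovery: available pods dip below the node count and the controller restores them.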
• [SLOW TEST:22.971 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":64,"skipped":977,"failed":0} [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:05:13.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Feb 20 00:05:23.587: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8548 PodName:pod-sharedvolume-a3933d6b-6311-4034-a3be-1ac76d70a9e2 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 00:05:23.587: INFO: >>> kubeConfig: /root/.kube/config I0220 00:05:23.651348 9 log.go:172] (0xc00287d600) (0xc000471180) Create stream I0220 00:05:23.651493 9 log.go:172] (0xc00287d600) (0xc000471180) Stream added, broadcasting: 1 I0220 00:05:23.656968 9 log.go:172] (0xc00287d600) Reply frame received for 1 I0220 00:05:23.657052 9 log.go:172] (0xc00287d600) (0xc0003cabe0) Create stream I0220 00:05:23.657103 9 log.go:172] (0xc00287d600) (0xc0003cabe0) Stream added, broadcasting: 3 I0220 00:05:23.658696 9 log.go:172] (0xc00287d600) Reply frame received for 3 I0220 00:05:23.658734 9 log.go:172] (0xc00287d600) (0xc000471400) Create stream I0220 00:05:23.658748 9 log.go:172] (0xc00287d600) (0xc000471400) Stream added, broadcasting: 5 I0220 00:05:23.660091 9 log.go:172] (0xc00287d600) Reply frame received for 5 I0220 00:05:23.754122 9 log.go:172] (0xc00287d600) Data frame received for 3 I0220 00:05:23.754327 9 log.go:172] (0xc0003cabe0) (3) Data frame handling I0220 00:05:23.754364 9 log.go:172] (0xc0003cabe0) (3) Data frame sent I0220 00:05:23.925563 9 log.go:172] (0xc00287d600) (0xc0003cabe0) Stream removed, broadcasting: 3 I0220 00:05:23.926231 9 log.go:172] (0xc00287d600) Data frame received for 1 I0220 00:05:23.926285 9 log.go:172] (0xc000471180) (1) Data frame handling I0220 00:05:23.926345 9 log.go:172] (0xc000471180) (1) Data frame sent I0220 00:05:23.926685 9 log.go:172] (0xc00287d600) (0xc000471180) Stream removed, broadcasting: 1 I0220 00:05:23.926723 9 log.go:172] (0xc00287d600) (0xc000471400) Stream removed, broadcasting: 5 I0220 00:05:23.926776 9 log.go:172] (0xc00287d600) Go away received I0220 00:05:23.928688 9 log.go:172] (0xc00287d600) (0xc000471180) Stream removed, broadcasting: 1 I0220 00:05:23.928717 9 log.go:172] (0xc00287d600) (0xc0003cabe0) Stream removed, broadcasting: 3 I0220 00:05:23.928733 9 log.go:172] (0xc00287d600) (0xc000471400)
Stream removed, broadcasting: 5 Feb 20 00:05:23.928: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:05:23.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8548" for this suite. • [SLOW TEST:10.653 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":65,"skipped":977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:05:23.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 STEP: creating a pod Feb 20 00:05:24.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-1568 -- logs-generator --log-lines-total 100 --run-duration 20s' Feb 20 00:05:24.376: INFO: stderr: "" Feb 20 00:05:24.376: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Waiting for log generator to start. Feb 20 00:05:24.376: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Feb 20 00:05:24.376: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1568" to be "running and ready, or succeeded" Feb 20 00:05:24.405: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 28.13448ms Feb 20 00:05:26.417: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040650858s Feb 20 00:05:28.427: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051021507s Feb 20 00:05:30.440: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063512239s Feb 20 00:05:32.451: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.074750985s Feb 20 00:05:32.452: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Feb 20 00:05:32.452: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Feb 20 00:05:32.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1568' Feb 20 00:05:32.736: INFO: stderr: "" Feb 20 00:05:32.737: INFO: stdout: "I0220 00:05:29.298606 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/pkcm 208\nI0220 00:05:29.498816 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/86r 321\nI0220 00:05:29.698967 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/hzw 583\nI0220 00:05:29.898966 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/7rv 422\nI0220 00:05:30.098876 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/pxz6 357\nI0220 00:05:30.301533 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/qh7 311\nI0220 00:05:30.499032 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/5n7 451\nI0220 00:05:30.699030 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/lv6r 416\nI0220 00:05:30.898900 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/bj5b 396\nI0220 00:05:31.098856 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/nbz 463\nI0220 00:05:31.298928 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/nqc 509\nI0220 00:05:31.498923 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/pn6f 531\nI0220 00:05:31.698874 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/njs 408\nI0220 00:05:31.898963 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/tvg 531\nI0220 00:05:32.099008 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/fb86 321\nI0220 00:05:32.302444 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/qhj2 208\nI0220 00:05:32.498806 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/lhnk 350\nI0220 00:05:32.698859 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/8wp6 479\n" STEP: limiting log lines Feb 20 00:05:32.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1568 --tail=1' Feb 20 00:05:32.877: INFO: stderr: "" Feb 20 00:05:32.878: INFO: stdout: "I0220 00:05:32.698859 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/8wp6 479\n" Feb 20 00:05:32.878: INFO: got output "I0220 00:05:32.698859 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/8wp6 479\n" STEP: limiting log bytes Feb 20 00:05:32.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1568 --limit-bytes=1' Feb 20 00:05:32.972: INFO: stderr: "" Feb 20 00:05:32.972: INFO: stdout: "I" Feb 20 00:05:32.972: INFO: got output "I" STEP: exposing timestamps Feb 20 00:05:32.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1568 --tail=1 --timestamps' Feb 20 00:05:33.093: INFO: stderr: "" Feb 20 00:05:33.094: INFO: stdout: "2020-02-20T00:05:32.899142719Z I0220 00:05:32.898804 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/jk4 573\n" Feb 20 00:05:33.094: INFO: got output "2020-02-20T00:05:32.899142719Z I0220 00:05:32.898804 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/jk4 573\n" STEP: restricting to a time range Feb 20 00:05:35.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator
--namespace=kubectl-1568 --since=1s' Feb 20 00:05:35.761: INFO: stderr: "" Feb 20 00:05:35.761: INFO: stdout: "I0220 00:05:34.899242 1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/mmld 454\nI0220 00:05:35.099524 1 logs_generator.go:76] 29 GET /api/v1/namespaces/ns/pods/fwc 595\nI0220 00:05:35.298987 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/qnsr 213\nI0220 00:05:35.499098 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/ns/pods/2dmt 357\nI0220 00:05:35.699656 1 logs_generator.go:76] 32 POST /api/v1/namespaces/ns/pods/k8m 526\n" Feb 20 00:05:35.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1568 --since=24h' Feb 20 00:05:35.884: INFO: stderr: "" Feb 20 00:05:35.885: INFO: stdout: "I0220 00:05:29.298606 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/pkcm 208\nI0220 00:05:29.498816 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/86r 321\nI0220 00:05:29.698967 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/hzw 583\nI0220 00:05:29.898966 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/7rv 422\nI0220 00:05:30.098876 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/pxz6 357\nI0220 00:05:30.301533 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/qh7 311\nI0220 00:05:30.499032 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/5n7 451\nI0220 00:05:30.699030 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/lv6r 416\nI0220 00:05:30.898900 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/bj5b 396\nI0220 00:05:31.098856 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/nbz 463\nI0220 00:05:31.298928 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/nqc 509\nI0220 00:05:31.498923 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/pn6f 531\nI0220 00:05:31.698874 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/njs 408\nI0220 00:05:31.898963 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/tvg 531\nI0220 00:05:32.099008 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/fb86 321\nI0220 00:05:32.302444 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/qhj2 208\nI0220 00:05:32.498806 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/lhnk 350\nI0220 00:05:32.698859 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/8wp6 479\nI0220 00:05:32.898804 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/jk4 573\nI0220 00:05:33.098869 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/wlg4 575\nI0220 00:05:33.298920 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/km2 374\nI0220 00:05:33.498901 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/zrzl 565\nI0220 00:05:33.698942 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/knh 232\nI0220 00:05:33.898845 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/9h7 493\nI0220 00:05:34.099056 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/sj99 333\nI0220 00:05:34.298940 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/pp7k 233\nI0220 00:05:34.499180 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/m8km 409\nI0220 00:05:34.699089 1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/f9r 559\nI0220 00:05:34.899242 1 logs_generator.go:76] 28 GET 
/api/v1/namespaces/default/pods/mmld 454\nI0220 00:05:35.099524 1 logs_generator.go:76] 29 GET /api/v1/namespaces/ns/pods/fwc 595\nI0220 00:05:35.298987 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/qnsr 213\nI0220 00:05:35.499098 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/ns/pods/2dmt 357\nI0220 00:05:35.699656 1 logs_generator.go:76] 32 POST /api/v1/namespaces/ns/pods/k8m 526\n"
[AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472
Feb 20 00:05:35.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1568'
Feb 20 00:05:40.842: INFO: stderr: ""
Feb 20 00:05:40.843: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:05:40.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1568" for this suite.
• [SLOW TEST:16.870 seconds]
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":280,"completed":66,"skipped":1002,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
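Editor's note: the spec above exercises kubectl's log-filtering flags (--tail, --limit-bytes, --timestamps, --since) by shelling out to the kubectl binary. As a minimal standalone sketch of the same flags — assuming kubectl is on PATH and reusing the pod name "logs-generator" and namespace "kubectl-1568" from the run above — the following Go program replays each filter the test checked:

package main

import (
	"fmt"
	"os/exec"
)

// kubectlLogs shells out to kubectl much as the e2e framework does,
// appending the filtering flags under test to a base `kubectl logs` call.
func kubectlLogs(extra ...string) string {
	args := append([]string{"logs", "logs-generator", "--namespace=kubectl-1568"}, extra...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
	return string(out)
}

func main() {
	fmt.Print(kubectlLogs("--tail=1"))                 // only the most recent line
	fmt.Println(kubectlLogs("--limit-bytes=1"))        // stream truncated after one byte (the "I" seen above)
	fmt.Print(kubectlLogs("--tail=1", "--timestamps")) // each line prefixed with an RFC3339 timestamp
	fmt.Print(kubectlLogs("--since=1s"))               // only entries emitted in the last second
}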
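Editor's note: the next spec scales a Deployment from 10 to 30 replicas while two ReplicaSets coexist (the original httpd ReplicaSet at 8 and a webserver:404 rollout frozen at 5, with maxSurge=3), and asserts the extra replicas are split proportionally. A worked sketch of that arithmetic follows; the sizes are taken from the run below, while the leftover-assignment rule is a simplification of the deployment controller's actual tie-breaking, not its source:

package main

import "fmt"

func main() {
	maxSurge := 3
	oldRS, newRS := 8, 5    // ReplicaSet sizes mid-rollout, per the log below
	total := oldRS + newRS  // 13 = 10 + maxSurge
	target := 30 + maxSurge // scaling 10 -> 30 keeps the surge allowance: 33
	toAdd := target - total // 20 replicas to distribute

	// Each ReplicaSet grows in proportion to its current share (integer math);
	// in this run the single leftover replica landed on the newer ReplicaSet.
	addOld := toAdd * oldRS / total // 20*8/13 = 12
	addNew := toAdd * newRS / total // 20*5/13 = 7
	leftover := toAdd - addOld - addNew

	fmt.Println("old ReplicaSet scales to", oldRS+addOld)          // 20, as asserted below
	fmt.Println("new ReplicaSet scales to", newRS+addNew+leftover) // 13, as asserted below
}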
[sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:05:40.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:05:40.967: INFO: Creating deployment "webserver-deployment"
Feb 20 00:05:40.974: INFO: Waiting for observed generation 1
Feb 20 00:05:44.260: INFO: Waiting for all required pods to come up
Feb 20 00:05:44.479: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 20 00:06:10.710: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 20 00:06:10.719: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 20 00:06:10.728: INFO: Updating deployment webserver-deployment
Feb 20 00:06:10.728: INFO: Waiting for observed generation 2
Feb 20 00:06:12.917: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 20 00:06:12.925: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 20 00:06:13.426: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 20 00:06:13.720: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 20 00:06:13.721: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 20 00:06:13.835: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 20 00:06:13.843: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb 20 00:06:13.843: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 20 00:06:13.875: INFO: Updating deployment webserver-deployment
Feb 20 00:06:13.875: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 20 00:06:14.825: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 20 00:06:20.435: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 20 00:06:24.381: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4940 /apis/apps/v1/namespaces/deployment-4940/deployments/webserver-deployment 5315a8d7-a176-4f6b-af37-fd4626e972e4 9497200 3 2020-02-20 00:05:40 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0058f41d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-20 00:06:14 +0000 UTC,LastTransitionTime:2020-02-20 00:06:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-20 00:06:15 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
Feb 20 00:06:26.041: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-4940 /apis/apps/v1/namespaces/deployment-4940/replicasets/webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 9497173 3 2020-02-20 00:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5315a8d7-a176-4f6b-af37-fd4626e972e4 0xc00589f4f7 0xc00589f4f8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00589f708 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 20 00:06:26.041: INFO: All old ReplicaSets of Deployment "webserver-deployment": Feb 20 00:06:26.042: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-4940 /apis/apps/v1/namespaces/deployment-4940/replicasets/webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 9497192 3 2020-02-20 00:05:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5315a8d7-a176-4f6b-af37-fd4626e972e4 0xc00589f307 0xc00589f308}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00589f428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Feb 20 00:06:26.129: INFO: Pod "webserver-deployment-595b5b9587-6qk8l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6qk8l webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-6qk8l 8bec9007-a10d-4ba5-852e-b319e885dc4e 9497150 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328f3d7 0xc00328f3d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.130: INFO: Pod "webserver-deployment-595b5b9587-7fhvm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7fhvm webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-7fhvm afd65c5a-45ba-460d-9b23-d39f11ac7fb5 9497013 0 2020-02-20 00:05:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328f4f7 0xc00328f4f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-20 
00:05:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:06:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://90ed42836f53282cead04a917f866e2a8a42c830313977a187fcdf507b495389,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.131: INFO: Pod "webserver-deployment-595b5b9587-8dkbr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8dkbr webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-8dkbr 3116eb4d-e7b9-4ff0-81d4-d31a06e24daf 9497209 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328f6b0 0xc00328f6b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:No
Execute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-20 00:06:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.131: INFO: Pod "webserver-deployment-595b5b9587-9njq9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9njq9 webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-9njq9 6fd0ac01-f17d-4e73-adb8-738323c1811c 9497181 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328f837 0xc00328f838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.132: INFO: Pod "webserver-deployment-595b5b9587-b2kn9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b2kn9 webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-b2kn9 0179a1a2-ce0b-4213-a7b7-4fc1ca766d07 9497016 0 2020-02-20 00:05:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328f957 0xc00328f958}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-02-20 
00:05:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:06:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b7a85f240f273b8fd7886c64311c3e5cfa95677e0a76f501f876f89d136f4c0d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.133: INFO: Pod "webserver-deployment-595b5b9587-b67z4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b67z4 webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-b67z4 cc2e26db-5c4b-455f-81dd-83d320a61a20 9497007 0 2020-02-20 00:05:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328faf0 0xc00328faf1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-02-20 00:05:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:06:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://9cf7ce7aa7248aa880abfeb8cd08a20a39e90d033602568138dc8c485b58ba83,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.133: INFO: Pod "webserver-deployment-595b5b9587-bh4nl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bh4nl webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-bh4nl 58d781cb-5e96-411c-9ea5-aceae4deef7e 9497141 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328fc60 0xc00328fc61}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.134: INFO: Pod "webserver-deployment-595b5b9587-cj5hl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cj5hl webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-cj5hl bcd0f34d-d7d1-489d-b1b2-1627f45db062 9497182 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328fd77 0xc00328fd78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.134: INFO: Pod "webserver-deployment-595b5b9587-dlf7c" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dlf7c webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-dlf7c 75f117f1-058b-43cb-81a0-e535742ae987 
9497170 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328fe87 0xc00328fe88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.135: INFO: Pod "webserver-deployment-595b5b9587-dr6jh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dr6jh webserver-deployment-595b5b9587- deployment-4940 
/api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-dr6jh 3f5900e0-65df-4ae0-babb-ef4ff3491dc3 9497191 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00328ffa7 0xc00328ffa8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 
00:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-20 00:06:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.135: INFO: Pod "webserver-deployment-595b5b9587-fm7vq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fm7vq webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-fm7vq 498ac652-96e1-400f-9a4c-c5b0a5cd0261 9497021 0 2020-02-20 00:05:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594a267 0xc00594a268}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operat
or:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-20 00:05:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:06:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://dd61cfeb8338bae3a925382bbf0386f32e1aaaaab17e095e25d3ae05154a4f9e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
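
The "is available" / "is not available" verdict attached to each dump tracks the pod's Ready condition: fm7vq above is Running with Ready=True, while the Pending pods below carry only a PodScheduled condition. Below is a minimal sketch of that rule, assuming plain k8s.io/api/core/v1 types; podAvailable is a hypothetical helper written for illustration, not the e2e framework's own utility.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
)

// podAvailable mirrors the rule reflected in the log: the pod's Ready
// condition must be True, and must have been True for at least minReady.
func podAvailable(pod *corev1.Pod, minReady time.Duration, now time.Time) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type != corev1.PodReady {
			continue
		}
		if cond.Status != corev1.ConditionTrue {
			return false
		}
		// With minReady == 0, Ready alone is enough.
		return minReady == 0 || now.Sub(cond.LastTransitionTime.Time) >= minReady
	}
	return false // no Ready condition recorded yet, e.g. a just-scheduled pod
}

func main() {
	var pending corev1.Pod // status shaped like fwtv5 below: no Ready condition
	fmt.Println(podAvailable(&pending, 0, time.Now())) // false
}

With the deployment's minReadySeconds at its API default of zero, Ready=True alone makes a pod count as available, which is why the Running pods pass the check while the Pending ones do not.

Feb 20 00:06:26.136: INFO: Pod "webserver-deployment-595b5b9587-fwtv5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fwtv5 webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-fwtv5 63e7ec7d-e7fc-4351-aab6-7c897062e7d6 9497184 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594a5e0 0xc00594a5e1}] [] 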
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.136: INFO: Pod "webserver-deployment-595b5b9587-fwwv6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fwwv6 webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-fwwv6 5fca1595-4b10-4cf8-9f39-09043c7b221e 9497183 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594a7b7 0xc00594a7b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.137: INFO: Pod "webserver-deployment-595b5b9587-hgvs9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hgvs9 webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-hgvs9 
f2a76379-404b-4095-bca9-c0dd6835bdff 9497169 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594ad17 0xc00594ad18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.137: INFO: Pod "webserver-deployment-595b5b9587-mb7nd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mb7nd webserver-deployment-595b5b9587- 
deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-mb7nd 2131c4e3-83d2-4959-961f-405d1fa32cba 9497171 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594b1b7 0xc00594b1b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.138: INFO: Pod "webserver-deployment-595b5b9587-ms86t" 
is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ms86t webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-ms86t 407e73f4-1065-4113-9493-54104cd7cea3 9497010 0 2020-02-20 00:05:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594b467 0xc00594b468}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-20 00:05:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:06:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b5b1e225e3a5b2aaaa4c60013e9a859f39fb59ad447e7ff71b4dde75d6f5d0bb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.139: INFO: Pod "webserver-deployment-595b5b9587-qb62g" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qb62g webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-qb62g 1dc90c82-67ca-4406-bfbe-f6ea1d362f4e 9497043 0 2020-02-20 00:05:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594b5d0 0xc00594b5d1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysct
l{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-02-20 00:05:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:06:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a5efae3018c5bb8bcd7beff04e7af9cbe4ef334057e909cd6f8a270c2ffe6790,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
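
Every pod dump in this test carries the same pair of NoExecute tolerations with TolerationSeconds:*300 (the asterisk marks a pointer value in the dump). These are not declared in the test's pod template; the DefaultTolerationSeconds admission plugin injects five-minute tolerations for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable so pods survive brief node outages. A sketch spelling out the equivalent tolerations with client-go types, for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The two tolerations the admission layer injected into every pod above;
	// 300 seconds is the cluster default.
	defaultSeconds := int64(300)
	injected := []corev1.Toleration{
		{
			Key:               "node.kubernetes.io/not-ready",
			Operator:          corev1.TolerationOpExists,
			Effect:            corev1.TaintEffectNoExecute,
			TolerationSeconds: &defaultSeconds,
		},
		{
			Key:               "node.kubernetes.io/unreachable",
			Operator:          corev1.TolerationOpExists,
			Effect:            corev1.TaintEffectNoExecute,
			TolerationSeconds: &defaultSeconds,
		},
	}
	for _, t := range injected {
		fmt.Printf("tolerate %s (%s) for %ds\n", t.Key, t.Effect, *t.TolerationSeconds)
	}
}

Feb 20 00:06:26.139: INFO: Pod "webserver-deployment-595b5b9587-rmrvt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rmrvt webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-rmrvt ad20b4e5-00f2-49c4-b25d-b64bcbeee41e 9497167 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594b740 0xc00594b741}] [] 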
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.140: INFO: Pod "webserver-deployment-595b5b9587-s5q69" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s5q69 webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-s5q69 4d6d7cec-ca44-415d-87e9-4618f93d5084 9497038 0 2020-02-20 00:05:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594b857 0xc00594b858}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-20 00:05:41 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:06:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://45e761add67eb8ebbe5ef345cac81b5fae3d1c8c9b5cb9b1caa23ad02423bb2e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.140: INFO: Pod "webserver-deployment-595b5b9587-x2mjb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x2mjb webserver-deployment-595b5b9587- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-595b5b9587-x2mjb 557449b7-3631-473d-87fb-a5cb3d2748f5 9497035 0 2020-02-20 00:05:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f5a43879-d1f1-4d15-9e64-1af3fa5d350f 0xc00594b9f0 0xc00594b9f1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:05:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-02-20 00:05:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:06:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://fc559f700c3bc748b42af10eb5e285ce2950d90fe12b224a5e8665b90e93434a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
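
From here the dumps switch from the 595b5b9587 ReplicaSet (Image:httpd:2.4.38-alpine) to the c7997dcc8 ReplicaSet, whose template points at Image:webserver:404, a deliberately unresolvable tag used to stall the rollout: its pods stay Pending with the httpd container Waiting (Reason:ContainerCreating), never report Ready, and so are all logged "is not available". A minimal sketch, again assuming core/v1 types, of pulling out the waiting reason these statuses show; waitingReason is a hypothetical helper for illustration only.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReason reports why a pod's first unready container is stuck,
// e.g. "ContainerCreating".
func waitingReason(pod *corev1.Pod) string {
	for _, cs := range pod.Status.ContainerStatuses {
		if !cs.Ready && cs.State.Waiting != nil {
			return cs.State.Waiting.Reason
		}
	}
	return ""
}

func main() {
	// Shaped like the c7997dcc8 pods below: Pending, container never started.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		ContainerStatuses: []corev1.ContainerStatus{{
			Name:  "httpd",
			Ready: false,
			State: corev1.ContainerState{
				Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"},
			},
		}},
	}}
	fmt.Println(pod.Status.Phase, waitingReason(pod)) // Pending ContainerCreating
}

Feb 20 00:06:26.141: INFO: Pod "webserver-deployment-c7997dcc8-45hcl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-45hcl webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-45hcl 6ae97e1e-8e05-4a04-87df-ccb5e91a8bcc 9497143 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc00594bb60 0xc00594bb61}] [] 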
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.142: INFO: Pod "webserver-deployment-c7997dcc8-5fjw8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5fjw8 webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-5fjw8 d45116d4-cb59-4c59-8ad1-408d98d73d50 9497168 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc00594bc87 0xc00594bc88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.142: INFO: Pod "webserver-deployment-c7997dcc8-6m59g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6m59g webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-6m59g cea34769-4bc0-43bd-a511-48e8cacfc208 9497204 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc00594bdb7 0xc00594bdb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-20 00:06:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.143: INFO: Pod "webserver-deployment-c7997dcc8-84zmh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-84zmh webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-84zmh 9fbba894-8074-41d2-a80a-d66e5a625b1c 9497084 0 2020-02-20 00:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc00594bf37 0xc00594bf38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassNa
me:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-20 00:06:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.144: INFO: Pod "webserver-deployment-c7997dcc8-f888b" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f888b webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-f888b 7032c96d-ac58-4a3c-8d81-9e8c64a39edb 9497080 0 2020-02-20 00:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc0059760a7 0xc0059760a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-20 00:06:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.144: INFO: Pod "webserver-deployment-c7997dcc8-gczq8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gczq8 webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-gczq8 4af1ec9e-8f43-4706-b2b0-812bd5a5fe8d 9497165 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc005976227 0xc005976228}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolic
y:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-20 00:06:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.145: INFO: Pod "webserver-deployment-c7997dcc8-hmd72" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hmd72 webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-hmd72 1e82c78d-bc42-45c9-a570-5982299b871b 9497202 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc005976397 0xc005976398}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-20 00:06:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.146: INFO: Pod "webserver-deployment-c7997dcc8-k8n85" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k8n85 webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-k8n85 97829852-0aa5-409d-9090-53046b1635bc 9497105 0 2020-02-20 00:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc005976517 0xc005976518}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolic
y:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-20 00:06:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.146: INFO: Pod "webserver-deployment-c7997dcc8-nmsc7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nmsc7 webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-nmsc7 ee5e2fe2-1205-4344-826f-7bb46777b03a 9497199 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc005976687 0xc005976688}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-20 00:06:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.147: INFO: Pod "webserver-deployment-c7997dcc8-rv6zg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rv6zg webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-rv6zg 551221cc-2f70-483c-8e79-043db406b4d8 9497106 0 2020-02-20 00:06:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc005976807 0xc005976808}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:
ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-20 00:06:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.147: INFO: Pod "webserver-deployment-c7997dcc8-s9hcp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s9hcp webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-s9hcp a4cdc62c-ff95-4771-bc65-71aa1dabc7b4 9497087 0 2020-02-20 00:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc005976987 0xc005976988}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-20 00:06:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.148: INFO: Pod "webserver-deployment-c7997dcc8-x7x9h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x7x9h webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-x7x9h 214525ab-cecd-41f0-a374-c32c3d39b22b 9497194 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc005976b37 0xc005976b38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolic
y:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-20 00:06:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 20 00:06:26.148: INFO: Pod "webserver-deployment-c7997dcc8-z7qrz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z7qrz webserver-deployment-c7997dcc8- deployment-4940 /api/v1/namespaces/deployment-4940/pods/webserver-deployment-c7997dcc8-z7qrz 429c0678-27db-4464-8974-58394fb954e9 9497207 0 2020-02-20 00:06:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 895caeb0-43fe-41e7-bd66-c4c8bd44c33c 0xc005976cb7 0xc005976cb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fdnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fdnj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fdnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-20 00:06:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:06:26.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4940" for this suite. • [SLOW TEST:47.161 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":67,"skipped":1030,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:06:28.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 20 00:06:32.317: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:34.356: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:37.724: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:38.603: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:40.856: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:44.313: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:45.249: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:46.575: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:50.885: INFO: 
The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:53.391: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:55.964: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:57.072: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:06:59.080: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:07:01.031: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:07:02.595: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:07:05.462: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:07:06.512: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:07:10.162: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:07:11.622: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:07:13.101: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:07:15.013: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:07:16.327: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = false) Feb 20 00:07:18.536: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = false) Feb 20 00:07:20.621: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = false) Feb 20 00:07:24.123: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = false) Feb 20 00:07:24.802: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = false) Feb 20 00:07:26.356: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = false) Feb 20 00:07:29.548: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = false) Feb 20 00:07:30.389: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = false) Feb 20 00:07:32.349: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = false) Feb 20 00:07:34.607: INFO: The status of Pod test-webserver-151b10d7-9970-47fc-9d78-1ecb2e7f83f6 is Running (Ready = true) Feb 20 00:07:34.630: INFO: Container started at 2020-02-20 00:07:08 +0000 UTC, pod became ready at 2020-02-20 00:07:32 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:07:34.630: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7289" for this suite. • [SLOW TEST:66.626 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":68,"skipped":1038,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:07:34.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the initial replication controller Feb 20 00:07:35.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5086' Feb 20 00:07:36.328: INFO: stderr: "" Feb 20 00:07:36.329: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 20 00:07:36.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5086' Feb 20 00:07:37.452: INFO: stderr: "" Feb 20 00:07:37.452: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Feb 20 00:07:42.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5086' Feb 20 00:07:42.689: INFO: stderr: "" Feb 20 00:07:42.689: INFO: stdout: "update-demo-nautilus-5frmr update-demo-nautilus-fvdcf " Feb 20 00:07:42.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5frmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:07:42.906: INFO: stderr: "" Feb 20 00:07:42.906: INFO: stdout: "" Feb 20 00:07:42.906: INFO: update-demo-nautilus-5frmr is created but not running Feb 20 00:07:47.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5086' Feb 20 00:07:48.048: INFO: stderr: "" Feb 20 00:07:48.048: INFO: stdout: "update-demo-nautilus-5frmr update-demo-nautilus-fvdcf " Feb 20 00:07:48.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5frmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:07:48.996: INFO: stderr: "" Feb 20 00:07:48.996: INFO: stdout: "" Feb 20 00:07:48.996: INFO: update-demo-nautilus-5frmr is created but not running Feb 20 00:07:53.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5086' Feb 20 00:07:54.130: INFO: stderr: "" Feb 20 00:07:54.131: INFO: stdout: "update-demo-nautilus-5frmr update-demo-nautilus-fvdcf " Feb 20 00:07:54.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5frmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:07:54.272: INFO: stderr: "" Feb 20 00:07:54.272: INFO: stdout: "true" Feb 20 00:07:54.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5frmr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:07:54.357: INFO: stderr: "" Feb 20 00:07:54.357: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 00:07:54.357: INFO: validating pod update-demo-nautilus-5frmr Feb 20 00:07:54.368: INFO: got data: { "image": "nautilus.jpg" } Feb 20 00:07:54.368: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 00:07:54.368: INFO: update-demo-nautilus-5frmr is verified up and running Feb 20 00:07:54.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvdcf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:07:54.458: INFO: stderr: "" Feb 20 00:07:54.458: INFO: stdout: "true" Feb 20 00:07:54.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvdcf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:07:54.543: INFO: stderr: "" Feb 20 00:07:54.544: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 00:07:54.544: INFO: validating pod update-demo-nautilus-fvdcf Feb 20 00:07:54.577: INFO: got data: { "image": "nautilus.jpg" } Feb 20 00:07:54.577: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 00:07:54.577: INFO: update-demo-nautilus-fvdcf is verified up and running STEP: rolling-update to new replication controller Feb 20 00:07:54.582: INFO: scanned /root for discovery docs: Feb 20 00:07:54.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5086' Feb 20 00:08:25.459: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 20 00:08:25.459: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 20 00:08:25.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5086' Feb 20 00:08:25.606: INFO: stderr: "" Feb 20 00:08:25.606: INFO: stdout: "update-demo-kitten-4d9sz update-demo-kitten-j2ncc " Feb 20 00:08:25.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4d9sz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:08:25.691: INFO: stderr: "" Feb 20 00:08:25.691: INFO: stdout: "true" Feb 20 00:08:25.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4d9sz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:08:25.789: INFO: stderr: "" Feb 20 00:08:25.790: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 20 00:08:25.790: INFO: validating pod update-demo-kitten-4d9sz Feb 20 00:08:25.800: INFO: got data: { "image": "kitten.jpg" } Feb 20 00:08:25.800: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 20 00:08:25.800: INFO: update-demo-kitten-4d9sz is verified up and running Feb 20 00:08:25.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j2ncc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:08:25.944: INFO: stderr: "" Feb 20 00:08:25.944: INFO: stdout: "true" Feb 20 00:08:25.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j2ncc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5086' Feb 20 00:08:26.184: INFO: stderr: "" Feb 20 00:08:26.184: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 20 00:08:26.184: INFO: validating pod update-demo-kitten-j2ncc Feb 20 00:08:26.190: INFO: got data: { "image": "kitten.jpg" } Feb 20 00:08:26.190: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 20 00:08:26.190: INFO: update-demo-kitten-j2ncc is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:08:26.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5086" for this suite. • [SLOW TEST:51.545 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":280,"completed":69,"skipped":1048,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:08:26.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0220 00:08:29.716150 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 20 00:08:29.716: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:08:29.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4195" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":70,"skipped":1072,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:08:29.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 20 00:08:30.258: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 20 00:08:36.196: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 20 00:08:42.214: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 20 00:08:44.220: INFO: Creating deployment "test-rollover-deployment" Feb 20 00:08:44.259: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 20 00:08:46.272: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 20 00:08:46.281: INFO: Ensure that both replica sets have 1 created replica Feb 20 00:08:46.288: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 20 00:08:46.299: INFO: Updating deployment test-rollover-deployment Feb 20 00:08:46.299: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 20 00:08:48.374: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 20 00:08:48.400: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 20 00:08:48.412: INFO: all replica sets need to contain the pod-template-hash label Feb 20 00:08:48.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754126, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:08:50.427: INFO: all replica sets need to contain the pod-template-hash label Feb 20 00:08:50.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754126, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:08:52.429: INFO: all replica sets need to contain the pod-template-hash label Feb 20 00:08:52.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754126, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:08:54.432: INFO: all replica sets need to contain the pod-template-hash label Feb 20 00:08:54.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:08:56.422: INFO: all replica sets need to contain the pod-template-hash label Feb 20 00:08:56.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:08:58.424: INFO: all replica sets need to contain the pod-template-hash label Feb 20 00:08:58.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:09:00.427: INFO: all replica sets need to contain the pod-template-hash label Feb 20 00:09:00.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:09:02.429: INFO: all replica sets need to contain the pod-template-hash label Feb 20 00:09:02.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754124, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:09:04.422: INFO: Feb 20 00:09:04.422: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 20 00:09:04.432: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6154 /apis/apps/v1/namespaces/deployment-6154/deployments/test-rollover-deployment 1a20adb0-3562-43e9-b6fb-5d4ecd1f635c 9498000 2 2020-02-20 00:08:44 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005976de8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-20 00:08:44 +0000 UTC,LastTransitionTime:2020-02-20 00:08:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-20 00:09:04 +0000 UTC,LastTransitionTime:2020-02-20 00:08:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 20 00:09:04.436: INFO: New ReplicaSet 
"test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-6154 /apis/apps/v1/namespaces/deployment-6154/replicasets/test-rollover-deployment-574d6dfbff 4e8d7719-144d-4f5f-9daf-63c2ab0fc41c 9497989 2 2020-02-20 00:08:46 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 1a20adb0-3562-43e9-b6fb-5d4ecd1f635c 0xc005977257 0xc005977258}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0059772c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 20 00:09:04.436: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 20 00:09:04.437: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6154 /apis/apps/v1/namespaces/deployment-6154/replicasets/test-rollover-controller 6cc06b0f-a11b-43f3-bc50-85fa8f381422 9497998 2 2020-02-20 00:08:30 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 1a20adb0-3562-43e9-b6fb-5d4ecd1f635c 0xc005977187 0xc005977188}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0059771e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 20 00:09:04.437: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6154 /apis/apps/v1/namespaces/deployment-6154/replicasets/test-rollover-deployment-f6c94f66c 815cd8d3-7f1a-44f8-ab39-1f5db3d4ecb2 9497933 2 2020-02-20 00:08:44 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 1a20adb0-3562-43e9-b6fb-5d4ecd1f635c 0xc005977330 0xc005977331}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0059773a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 20 00:09:04.441: INFO: Pod "test-rollover-deployment-574d6dfbff-pmqjc" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-pmqjc test-rollover-deployment-574d6dfbff- deployment-6154 /api/v1/namespaces/deployment-6154/pods/test-rollover-deployment-574d6dfbff-pmqjc 0059ca2d-9290-48ab-9a7c-d378ff7a90ab 9497961 0 2020-02-20 00:08:46 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 4e8d7719-144d-4f5f-9daf-63c2ab0fc41c 0xc00594ba17 0xc00594ba18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shrwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shrwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shrwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:08:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:08:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:08:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:08:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-20 00:08:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:08:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://b22e1cbe62ccf159f5144667651f76b64f0f4cd025fffda61a34ef2480f1fbcc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:09:04.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6154" for this suite. • [SLOW TEST:34.724 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":71,"skipped":1078,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:09:04.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-downwardapi-4xtd STEP: Creating a pod to test atomic-volume-subpath Feb 20 00:09:04.765: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-4xtd" in namespace "subpath-6300" to be "success or failure" Feb 20 00:09:04.787: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.328367ms Feb 20 00:09:06.796: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030242101s Feb 20 00:09:08.803: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037186265s Feb 20 00:09:10.810: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044247248s Feb 20 00:09:12.816: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051180545s Feb 20 00:09:14.827: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.062004622s Feb 20 00:09:16.835: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. Elapsed: 12.07000961s Feb 20 00:09:18.847: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. Elapsed: 14.081611822s Feb 20 00:09:20.857: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. Elapsed: 16.091189852s Feb 20 00:09:22.880: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. Elapsed: 18.114887403s Feb 20 00:09:24.895: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. Elapsed: 20.129231969s Feb 20 00:09:26.903: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. Elapsed: 22.138092965s Feb 20 00:09:28.910: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. Elapsed: 24.14469086s Feb 20 00:09:30.918: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. Elapsed: 26.152257212s Feb 20 00:09:32.927: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Running", Reason="", readiness=true. Elapsed: 28.161499399s Feb 20 00:09:34.937: INFO: Pod "pod-subpath-test-downwardapi-4xtd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.172007096s STEP: Saw pod success Feb 20 00:09:34.938: INFO: Pod "pod-subpath-test-downwardapi-4xtd" satisfied condition "success or failure" Feb 20 00:09:34.941: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-4xtd container test-container-subpath-downwardapi-4xtd: STEP: delete the pod Feb 20 00:09:35.005: INFO: Waiting for pod pod-subpath-test-downwardapi-4xtd to disappear Feb 20 00:09:35.067: INFO: Pod pod-subpath-test-downwardapi-4xtd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-4xtd Feb 20 00:09:35.067: INFO: Deleting pod "pod-subpath-test-downwardapi-4xtd" in namespace "subpath-6300" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:09:35.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6300" for this suite. 
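The subpath spec that just finished mounts a single file out of an atomic-writer (downward API) volume rather than the whole volume. A minimal sketch of such a pod, with hypothetical names and paths rather than the test's own:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo        # hypothetical
  labels:
    podname: subpath-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29              # cached on the test nodes per the image list later in this log
    command: ["sh", "-c", "cat /etc/podinfo-file"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo-file
      subPath: podname               # mount one projected file, not the whole volume
  volumes:
  - name: podinfo
    downwardAPI:                     # an "atomic writer" volume: files are swapped via symlink
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.labels['podname']

One caveat worth noting: a subPath mount of an atomic-writer volume is bound at container start, so later updates to the downward API data are not reflected through the subPath the way they would be for a whole-volume mount.
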
• [SLOW TEST:30.638 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":72,"skipped":1107,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:09:35.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's command Feb 20 00:09:35.328: INFO: Waiting up to 5m0s for pod "var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768" in namespace "var-expansion-7894" to be "success or failure" Feb 20 00:09:35.338: INFO: Pod "var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768": Phase="Pending", Reason="", readiness=false. Elapsed: 9.71745ms Feb 20 00:09:37.345: INFO: Pod "var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017354845s Feb 20 00:09:39.352: INFO: Pod "var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023869964s Feb 20 00:09:41.385: INFO: Pod "var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056521807s Feb 20 00:09:43.396: INFO: Pod "var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067978577s STEP: Saw pod success Feb 20 00:09:43.396: INFO: Pod "var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768" satisfied condition "success or failure" Feb 20 00:09:43.400: INFO: Trying to get logs from node jerma-node pod var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768 container dapi-container: STEP: delete the pod Feb 20 00:09:43.501: INFO: Waiting for pod var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768 to disappear Feb 20 00:09:43.561: INFO: Pod var-expansion-6061cd7a-e945-4e60-8ba1-0310398bb768 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:09:43.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7894" for this suite. 
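The variable-expansion spec above relies on the kubelet substituting $(VAR) references in a container's command from the container's own environment before the command runs. A minimal sketch of the mechanism (names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "test message"
    # $(MESSAGE) is expanded by the kubelet from the env block above before the
    # shell ever runs; writing $$(MESSAGE) instead would escape the expansion
    # and pass the literal string through.
    command: ["sh", "-c", "echo $(MESSAGE)"]
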
• [SLOW TEST:8.508 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":73,"skipped":1115,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:09:43.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap that has name configmap-test-emptyKey-1afef4df-7972-4e16-b302-505a163b4df9 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:09:43.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3721" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":74,"skipped":1121,"failed":0} ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:09:43.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating all guestbook components Feb 20 00:09:44.181: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Feb 20 00:09:44.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3556' Feb 20 00:09:44.643: INFO: stderr: "" Feb 20 00:09:44.643: INFO: stdout: "service/agnhost-slave created\n" Feb 20 00:09:44.643: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Feb 20 00:09:44.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-3556' Feb 20 00:09:45.003: INFO: stderr: "" Feb 20 00:09:45.004: INFO: stdout: "service/agnhost-master created\n" Feb 20 00:09:45.004: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 20 00:09:45.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3556' Feb 20 00:09:45.525: INFO: stderr: "" Feb 20 00:09:45.526: INFO: stdout: "service/frontend created\n" Feb 20 00:09:45.527: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Feb 20 00:09:45.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3556' Feb 20 00:09:46.062: INFO: stderr: "" Feb 20 00:09:46.062: INFO: stdout: "deployment.apps/frontend created\n" Feb 20 00:09:46.063: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 20 00:09:46.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3556' Feb 20 00:09:46.633: INFO: stderr: "" Feb 20 00:09:46.634: INFO: stdout: "deployment.apps/agnhost-master created\n" Feb 20 00:09:46.635: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 20 00:09:46.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3556' Feb 20 00:09:47.917: INFO: stderr: "" Feb 20 00:09:47.917: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Feb 20 00:09:47.917: INFO: Waiting for all frontend pods to be Running. Feb 20 00:10:12.971: INFO: Waiting for frontend to serve content. Feb 20 00:10:12.994: INFO: Trying to add a new entry to the guestbook. Feb 20 00:10:13.011: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:10:18.042: INFO: Failed to get response from guestbook. 
err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:10:23.062: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:10:28.090: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:10:33.232: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:10:38.283: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:10:43.305: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:10:48.333: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:10:53.354: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:10:58.378: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:03.401: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:08.418: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:13.440: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:18.456: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:23.482: INFO: Failed to get response from guestbook. 
err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:28.504: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:33.520: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:38.544: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:43.572: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:48.598: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:53.621: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:11:58.637: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:03.663: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:08.685: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:13.706: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:18.724: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:23.741: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:28.783: INFO: Failed to get response from guestbook. 
err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:33.811: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:38.837: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:43.880: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:48.907: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:53.925: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:12:58.947: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:13:03.969: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:13:08.984: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused Feb 20 00:13:13.986: FAIL: Cannot added new entry in 180 seconds. Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x551f740, 0xc000a47600, 0xc005976cb0, 0xc) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 +0x551 k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:420 +0x165 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00207ca00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a k8s.io/kubernetes/test/e2e.TestE2E(0xc00207ca00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc00207ca00, 0x4c9f938) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 STEP: using delete to clean up resources Feb 20 00:13:13.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3556' Feb 20 00:13:16.621: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 20 00:13:16.621: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Feb 20 00:13:16.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3556' Feb 20 00:13:16.798: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 00:13:16.798: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Feb 20 00:13:16.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3556' Feb 20 00:13:16.973: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 00:13:16.974: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 20 00:13:16.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3556' Feb 20 00:13:17.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 00:13:17.115: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 20 00:13:17.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3556' Feb 20 00:13:17.265: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 00:13:17.265: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Feb 20 00:13:17.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3556' Feb 20 00:13:17.581: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 00:13:17.581: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "kubectl-3556". STEP: Found 37 events. 
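The failure above is the master repeatedly failing to propagate the write to the slave at 10.44.0.0:6379 with "connection refused". One pattern that narrows the ordinary version of this race in real manifests is gating the slave behind a readinessProbe, so the agnhost-slave Service only routes to it once port 6379 actually answers. A hedged sketch of that addition, trimmed to the relevant fields of the slave Deployment printed earlier (whether it would have rescued this particular run is unclear, since 10.44.0.0 looks like a node bridge address rather than a pod IP):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
  namespace: kubectl-3556
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["guestbook", "--slaveof", "agnhost-master", "--http-port", "6379"]
        ports:
        - containerPort: 6379
        readinessProbe:              # the addition: keep the pod out of the
          tcpSocket:                 # Service's endpoints until 6379 answers
            port: 6379
          initialDelaySeconds: 5
          periodSeconds: 5

To probe the same endpoint by hand from inside the cluster, a throwaway pod can mirror the failing URL from the log, but via the Service name instead of the raw address reported in the error (the pod name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: guestbook-probe              # hypothetical
  namespace: kubectl-3556
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: appropriate/curl          # cached on the test nodes per the image list below
    command: ["curl", "-sS", "http://agnhost-slave:6379/set?key=messages&value=TestEntry"]
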
Feb 20 00:13:17.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-9hnrw: {default-scheduler } Scheduled: Successfully assigned kubectl-3556/agnhost-master-74c46fb7d4-9hnrw to jerma-node Feb 20 00:13:17.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-j2rpr: {default-scheduler } Scheduled: Successfully assigned kubectl-3556/agnhost-slave-774cfc759f-j2rpr to jerma-node Feb 20 00:13:17.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-mrkvj: {default-scheduler } Scheduled: Successfully assigned kubectl-3556/agnhost-slave-774cfc759f-mrkvj to jerma-server-mvvl6gufaqub Feb 20 00:13:17.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-4fwhv: {default-scheduler } Scheduled: Successfully assigned kubectl-3556/frontend-6c5f89d5d4-4fwhv to jerma-node Feb 20 00:13:17.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-9f428: {default-scheduler } Scheduled: Successfully assigned kubectl-3556/frontend-6c5f89d5d4-9f428 to jerma-node Feb 20 00:13:17.593: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-mfm7m: {default-scheduler } Scheduled: Successfully assigned kubectl-3556/frontend-6c5f89d5d4-mfm7m to jerma-server-mvvl6gufaqub Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:46 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1 Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:46 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-9hnrw Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:46 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3 Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:46 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-4fwhv Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:46 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-mfm7m Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:46 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-9f428 Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:47 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2 Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:48 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-mrkvj Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:48 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-j2rpr Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:55 +0000 UTC - event for frontend-6c5f89d5d4-9f428: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:55 +0000 UTC - event for frontend-6c5f89d5d4-mfm7m: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:58 +0000 UTC - event for agnhost-slave-774cfc759f-mrkvj: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image 
"gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 20 00:13:17.593: INFO: At 2020-02-20 00:09:58 +0000 UTC - event for frontend-6c5f89d5d4-4fwhv: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:01 +0000 UTC - event for agnhost-master-74c46fb7d4-9hnrw: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:02 +0000 UTC - event for agnhost-slave-774cfc759f-mrkvj: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:02 +0000 UTC - event for frontend-6c5f89d5d4-mfm7m: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:03 +0000 UTC - event for agnhost-slave-774cfc759f-j2rpr: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:03 +0000 UTC - event for agnhost-slave-774cfc759f-mrkvj: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:03 +0000 UTC - event for frontend-6c5f89d5d4-9f428: {kubelet jerma-node} Created: Created container guestbook-frontend Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:03 +0000 UTC - event for frontend-6c5f89d5d4-mfm7m: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:05 +0000 UTC - event for frontend-6c5f89d5d4-4fwhv: {kubelet jerma-node} Created: Created container guestbook-frontend Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:06 +0000 UTC - event for agnhost-master-74c46fb7d4-9hnrw: {kubelet jerma-node} Created: Created container master Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:06 +0000 UTC - event for frontend-6c5f89d5d4-9f428: {kubelet jerma-node} Started: Started container guestbook-frontend Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:07 +0000 UTC - event for agnhost-master-74c46fb7d4-9hnrw: {kubelet jerma-node} Started: Started container master Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:07 +0000 UTC - event for agnhost-slave-774cfc759f-j2rpr: {kubelet jerma-node} Started: Started container slave Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:07 +0000 UTC - event for agnhost-slave-774cfc759f-j2rpr: {kubelet jerma-node} Created: Created container slave Feb 20 00:13:17.593: INFO: At 2020-02-20 00:10:07 +0000 UTC - event for frontend-6c5f89d5d4-4fwhv: {kubelet jerma-node} Started: Started container guestbook-frontend Feb 20 00:13:17.593: INFO: At 2020-02-20 00:13:17 +0000 UTC - event for agnhost-master-74c46fb7d4-9hnrw: {kubelet jerma-node} Killing: Stopping container master Feb 20 00:13:17.593: INFO: At 2020-02-20 00:13:17 +0000 UTC - event for frontend-6c5f89d5d4-4fwhv: {kubelet jerma-node} Killing: Stopping container guestbook-frontend Feb 20 00:13:17.593: INFO: At 2020-02-20 00:13:17 +0000 UTC - event for frontend-6c5f89d5d4-9f428: {kubelet jerma-node} Killing: Stopping container guestbook-frontend Feb 20 00:13:17.593: INFO: At 2020-02-20 00:13:17 +0000 UTC - event for frontend-6c5f89d5d4-mfm7m: {kubelet jerma-server-mvvl6gufaqub} Killing: Stopping container guestbook-frontend Feb 20 00:13:17.623: INFO: POD NODE PHASE GRACE CONDITIONS Feb 20 00:13:17.623: INFO: agnhost-master-74c46fb7d4-9hnrw 
jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:46 +0000 UTC }] Feb 20 00:13:17.623: INFO: agnhost-slave-774cfc759f-j2rpr jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:48 +0000 UTC }] Feb 20 00:13:17.623: INFO: agnhost-slave-774cfc759f-mrkvj jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:48 +0000 UTC }] Feb 20 00:13:17.623: INFO: frontend-6c5f89d5d4-4fwhv jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:46 +0000 UTC }] Feb 20 00:13:17.623: INFO: frontend-6c5f89d5d4-9f428 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:46 +0000 UTC }] Feb 20 00:13:17.623: INFO: frontend-6c5f89d5d4-mfm7m jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:10:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 00:09:46 +0000 UTC }] Feb 20 00:13:17.623: INFO: Feb 20 00:13:17.651: INFO: Logging node info for node jerma-node Feb 20 00:13:17.692: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 9498336 0 2020-01-04 11:59:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-20 00:10:08 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-20 00:10:08 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-20 00:10:08 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-20 00:10:08 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 
weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 20 00:13:17.694: INFO: Logging kubelet events for node jerma-node Feb 20 00:13:17.702: INFO: Logging pods the kubelet thinks is on node jerma-node Feb 20 00:13:17.752: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:17.752: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 00:13:17.752: INFO: frontend-6c5f89d5d4-9f428 started at 2020-02-20 00:09:46 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:17.752: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 20 00:13:17.752: INFO: frontend-6c5f89d5d4-4fwhv started at 2020-02-20 00:09:46 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:17.752: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 20 00:13:17.752: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded) Feb 20 
00:13:17.752: INFO: Container weave ready: true, restart count 1 Feb 20 00:13:17.752: INFO: Container weave-npc ready: true, restart count 0 Feb 20 00:13:17.752: INFO: agnhost-master-74c46fb7d4-9hnrw started at 2020-02-20 00:09:47 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:17.752: INFO: Container master ready: true, restart count 0 Feb 20 00:13:17.752: INFO: agnhost-slave-774cfc759f-j2rpr started at 2020-02-20 00:09:48 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:17.752: INFO: Container slave ready: true, restart count 0 W0220 00:13:17.764915 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 20 00:13:17.856: INFO: Latency metrics for node jerma-node Feb 20 00:13:17.857: INFO: Logging node info for node jerma-server-mvvl6gufaqub Feb 20 00:13:18.870: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 9498582 0 2020-01-04 11:47:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-20 00:11:56 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-20 00:11:56 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-20 00:11:56 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-20 00:11:56 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 20 00:13:18.872: INFO: Logging kubelet events for node jerma-server-mvvl6gufaqub Feb 20 00:13:18.883: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub Feb 20 00:13:18.907: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:18.907: INFO: Container coredns ready: true, restart count 0 Feb 20 00:13:18.907: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:18.907: INFO: Container coredns ready: true, restart count 0 Feb 20 00:13:18.907: INFO: agnhost-slave-774cfc759f-mrkvj started at 2020-02-20 00:09:49 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:18.907: INFO: Container slave ready: true, restart count 0 Feb 20 00:13:18.907: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:18.907: INFO: Container kube-controller-manager ready: true, restart count 14 Feb 20 00:13:18.907: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:18.907: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 00:13:18.907: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded) Feb 20 00:13:18.907: INFO: Container weave ready: true, restart count 0 Feb 20 00:13:18.907: INFO: Container weave-npc ready: true, restart count 0 Feb 20 00:13:18.907: INFO: frontend-6c5f89d5d4-mfm7m started at 2020-02-20 00:09:46 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:18.907: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 20 00:13:18.907: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:18.907: INFO: Container kube-scheduler ready: true, restart count 18 Feb 20 00:13:18.907: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:18.907: INFO: Container kube-apiserver ready: 
true, restart count 1 Feb 20 00:13:18.907: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Feb 20 00:13:18.907: INFO: Container etcd ready: true, restart count 1 W0220 00:13:18.913648 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 20 00:13:18.957: INFO: Latency metrics for node jerma-server-mvvl6gufaqub Feb 20 00:13:18.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3556" for this suite. • Failure [215.611 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388 should create and stop a working application [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 20 00:13:13.986: Cannot add new entry in 180 seconds. /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":280,"completed":74,"skipped":1121,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:13:19.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 20 00:13:20.037: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 20 00:13:20.105: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 20 00:13:25.394: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 20 00:13:33.428: INFO: Creating deployment "test-rolling-update-deployment" Feb 20 00:13:33.439: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 20 00:13:33.449: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 20 00:13:35.459: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected Feb 20 00:13:35.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}},
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:13:37.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:13:39.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754413, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:13:41.468: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 20 00:13:41.478: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3449 /apis/apps/v1/namespaces/deployment-3449/deployments/test-rolling-update-deployment af6e7576-c576-4976-87e8-e30cfb756a2b 9498952 1 2020-02-20 00:13:33 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001ac0838 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-20 00:13:33 +0000 UTC,LastTransitionTime:2020-02-20 00:13:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-20 00:13:39 +0000 UTC,LastTransitionTime:2020-02-20 00:13:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 20 00:13:41.483: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-3449 /apis/apps/v1/namespaces/deployment-3449/replicasets/test-rolling-update-deployment-67cf4f6444 2d69ae50-9365-435f-abe9-058944cf12ca 9498940 1 2020-02-20 00:13:33 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment af6e7576-c576-4976-87e8-e30cfb756a2b 0xc00497c727 0xc00497c728}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00497c798 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 20 00:13:41.483: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 20 00:13:41.483: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3449 /apis/apps/v1/namespaces/deployment-3449/replicasets/test-rolling-update-controller 123fac47-0238-4493-8709-5af11e09f4cd 9498950 2
2020-02-20 00:13:20 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment af6e7576-c576-4976-87e8-e30cfb756a2b 0xc00497c657 0xc00497c658}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00497c6b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 20 00:13:41.489: INFO: Pod "test-rolling-update-deployment-67cf4f6444-ckvjf" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-ckvjf test-rolling-update-deployment-67cf4f6444- deployment-3449 /api/v1/namespaces/deployment-3449/pods/test-rolling-update-deployment-67cf4f6444-ckvjf b46ba276-ba6c-4b42-b320-0241a2c054d7 9498939 0 2020-02-20 00:13:33 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 2d69ae50-9365-435f-abe9-058944cf12ca 0xc00497cbd7 0xc00497cbd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ljtf5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ljtf5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ljtf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContex
t:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:13:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:13:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:13:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:13:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-20 00:13:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:13:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://763f85406f51025952a304e88fa27f4e1ae6b56a8db6bed5f6d41568b18d19ab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:13:41.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3449" for this suite. 
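For manual reproduction, the Deployment this test exercises can be sketched as a manifest. All field values below are read off the object dump above; the suite itself creates the object through the Go client rather than kubectl, and the namespace is test-generated, so treat this strictly as an approximation:
# Sketch of the Deployment under test, reconstructed from the logged spec.
kubectl apply -n deployment-3449 -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # as printed in the spec dump
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        imagePullPolicy: IfNotPresent
EOF
# Watch the rollout replace the adopted replica set's pod:
kubectl rollout status deployment/test-rolling-update-deployment -n deployment-3449
The revision annotations in the dump show the mechanism being verified: the adopted replica set carries revision 3546343826724305832 and the new replica set the Deployment creates carries the next revision, 3546343826724305833.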
• [SLOW TEST:21.938 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":75,"skipped":1121,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:13:41.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 20 00:13:42.221: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 20 00:13:44.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:13:46.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, 
loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:13:48.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 00:13:50.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 20 00:13:53.303: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 20 00:13:53.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6063-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:13:54.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4703" for this suite. STEP: Destroying namespace "webhook-4703-markers" for this suite. 
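The log records the webhook registration only as a STEP, so the exact AdmissionRegistration object is not shown. The following is a minimal sketch of what such a registration looks like; the webhook name, service name, and namespace come from the log, while the path, API group, versions, and resource plural are placeholders rather than the suite's actual values:
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook                # hypothetical name
webhooks:
- name: e2e-test-webhook-6063-crds.webhook.example.com
  clientConfig:
    service:
      name: e2e-test-webhook                     # the service paired with an endpoint above
      namespace: webhook-4703
      path: /mutating-custom-resource            # assumed path
    # caBundle for the webhook's serving certificate omitted here
  rules:
  - apiGroups: ["webhook.example.com"]           # assumed group for the test CRD
    apiVersions: ["v1"]                          # assumed
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-6063-crds"]    # assumed plural
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF
Once the configuration is in place, any matching custom resource the test creates is routed through the webhook service before admission, which is how the suite observes the mutation.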
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.426 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":76,"skipped":1132,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:13:54.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service multi-endpoint-test in namespace services-2619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2619 to expose endpoints map[] Feb 20 00:13:55.056: INFO: Get endpoints failed (7.307814ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 20 00:13:56.073: INFO: successfully validated that service multi-endpoint-test in namespace services-2619 exposes endpoints map[] (1.023546523s elapsed) STEP: Creating pod pod1 in namespace services-2619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2619 to expose endpoints map[pod1:[100]] Feb 20 00:14:00.420: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.331545126s elapsed, will retry) Feb 20 00:14:04.476: INFO: successfully validated that service multi-endpoint-test in namespace services-2619 exposes endpoints map[pod1:[100]] (8.387512274s elapsed) STEP: Creating pod pod2 in namespace services-2619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2619 to expose endpoints map[pod1:[100] pod2:[101]] Feb 20 00:14:08.641: INFO: Unexpected endpoints: found map[ca186c3d-cb09-45ed-b756-83cce91999da:[100]], expected map[pod1:[100] pod2:[101]] (4.158883282s elapsed, will retry) Feb 20 00:14:11.845: INFO: successfully validated that service multi-endpoint-test in namespace services-2619 exposes endpoints map[pod1:[100] pod2:[101]] (7.362856522s elapsed) STEP: Deleting pod pod1 in namespace services-2619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2619 to expose endpoints map[pod2:[101]] Feb 20 00:14:11.944: INFO: successfully validated that service multi-endpoint-test in namespace services-2619 exposes endpoints map[pod2:[101]] (60.576829ms elapsed) STEP: Deleting pod pod2 in namespace 
services-2619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2619 to expose endpoints map[] Feb 20 00:14:12.025: INFO: successfully validated that service multi-endpoint-test in namespace services-2619 exposes endpoints map[] (12.808555ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:14:12.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2619" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:17.162 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":280,"completed":77,"skipped":1141,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:14:12.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-2246 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 20 00:14:12.279: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 20 00:14:12.496: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:14:14.669: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:14:16.507: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:14:18.747: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:14:20.507: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:14:22.673: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:14:24.513: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 20 00:14:26.511: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 20 00:14:28.508: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 20 00:14:30.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 20 00:14:32.507: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 20 
00:14:34.505: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 20 00:14:36.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 20 00:14:38.509: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 20 00:14:40.505: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 20 00:14:42.505: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 20 00:14:42.511: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 20 00:14:44.519: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 20 00:14:54.579: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2246 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 00:14:54.579: INFO: >>> kubeConfig: /root/.kube/config I0220 00:14:54.649170 9 log.go:172] (0xc00287d970) (0xc001ac6fa0) Create stream I0220 00:14:54.649356 9 log.go:172] (0xc00287d970) (0xc001ac6fa0) Stream added, broadcasting: 1 I0220 00:14:54.659436 9 log.go:172] (0xc00287d970) Reply frame received for 1 I0220 00:14:54.659582 9 log.go:172] (0xc00287d970) (0xc000e27220) Create stream I0220 00:14:54.659602 9 log.go:172] (0xc00287d970) (0xc000e27220) Stream added, broadcasting: 3 I0220 00:14:54.661342 9 log.go:172] (0xc00287d970) Reply frame received for 3 I0220 00:14:54.661396 9 log.go:172] (0xc00287d970) (0xc001c45400) Create stream I0220 00:14:54.661426 9 log.go:172] (0xc00287d970) (0xc001c45400) Stream added, broadcasting: 5 I0220 00:14:54.668279 9 log.go:172] (0xc00287d970) Reply frame received for 5 I0220 00:14:55.785100 9 log.go:172] (0xc00287d970) Data frame received for 3 I0220 00:14:55.785231 9 log.go:172] (0xc000e27220) (3) Data frame handling I0220 00:14:55.785275 9 log.go:172] (0xc000e27220) (3) Data frame sent I0220 00:14:55.887529 9 log.go:172] (0xc00287d970) (0xc000e27220) Stream removed, broadcasting: 3 I0220 00:14:55.887804 9 log.go:172] (0xc00287d970) (0xc001c45400) Stream removed, broadcasting: 5 I0220 00:14:55.887902 9 log.go:172] (0xc00287d970) Data frame received for 1 I0220 00:14:55.887950 9 log.go:172] (0xc001ac6fa0) (1) Data frame handling I0220 00:14:55.888036 9 log.go:172] (0xc001ac6fa0) (1) Data frame sent I0220 00:14:55.888065 9 log.go:172] (0xc00287d970) (0xc001ac6fa0) Stream removed, broadcasting: 1 I0220 00:14:55.888395 9 log.go:172] (0xc00287d970) (0xc001ac6fa0) Stream removed, broadcasting: 1 I0220 00:14:55.888424 9 log.go:172] (0xc00287d970) (0xc000e27220) Stream removed, broadcasting: 3 I0220 00:14:55.888443 9 log.go:172] (0xc00287d970) (0xc001c45400) Stream removed, broadcasting: 5 Feb 20 00:14:55.888: INFO: Found all expected endpoints: [netserver-0] Feb 20 00:14:55.894: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2246 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 00:14:55.894: INFO: >>> kubeConfig: /root/.kube/config I0220 00:14:55.951423 9 log.go:172] (0xc001a6c2c0) (0xc000d46dc0) Create stream I0220 00:14:55.951573 9 log.go:172] (0xc001a6c2c0) (0xc000d46dc0) Stream added, broadcasting: 1 I0220 00:14:55.961998 9 log.go:172] (0xc001a6c2c0) Reply frame received for 1 I0220 00:14:55.962350 9 log.go:172] (0xc001a6c2c0) (0xc000e27360) Create stream I0220 00:14:55.962388 9 log.go:172] (0xc001a6c2c0) 
(0xc000e27360) Stream added, broadcasting: 3 I0220 00:14:55.965183 9 log.go:172] (0xc001a6c2c0) Reply frame received for 3 I0220 00:14:55.965298 9 log.go:172] (0xc001a6c2c0) (0xc000d46f00) Create stream I0220 00:14:55.965332 9 log.go:172] (0xc001a6c2c0) (0xc000d46f00) Stream added, broadcasting: 5 I0220 00:14:55.968161 9 log.go:172] (0xc001a6c2c0) Reply frame received for 5 I0220 00:14:57.057224 9 log.go:172] (0xc001a6c2c0) Data frame received for 3 I0220 00:14:57.057424 9 log.go:172] (0xc000e27360) (3) Data frame handling I0220 00:14:57.057474 9 log.go:172] (0xc000e27360) (3) Data frame sent I0220 00:14:57.198692 9 log.go:172] (0xc001a6c2c0) Data frame received for 1 I0220 00:14:57.198812 9 log.go:172] (0xc001a6c2c0) (0xc000e27360) Stream removed, broadcasting: 3 I0220 00:14:57.198867 9 log.go:172] (0xc000d46dc0) (1) Data frame handling I0220 00:14:57.198900 9 log.go:172] (0xc000d46dc0) (1) Data frame sent I0220 00:14:57.198924 9 log.go:172] (0xc001a6c2c0) (0xc000d46f00) Stream removed, broadcasting: 5 I0220 00:14:57.198961 9 log.go:172] (0xc001a6c2c0) (0xc000d46dc0) Stream removed, broadcasting: 1 I0220 00:14:57.198978 9 log.go:172] (0xc001a6c2c0) Go away received I0220 00:14:57.199199 9 log.go:172] (0xc001a6c2c0) (0xc000d46dc0) Stream removed, broadcasting: 1 I0220 00:14:57.199280 9 log.go:172] (0xc001a6c2c0) (0xc000e27360) Stream removed, broadcasting: 3 I0220 00:14:57.199300 9 log.go:172] (0xc001a6c2c0) (0xc000d46f00) Stream removed, broadcasting: 5 Feb 20 00:14:57.199: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:14:57.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2246" for this suite. 
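The UDP checks themselves appear verbatim in the ExecWithOptions entries above. To rerun one by hand against this cluster state (the pod IPs 10.44.0.1 and 10.32.0.4 were specific to this run), the same probe can be issued through kubectl exec:
# Ask each netserver for its hostname over UDP port 8081; a non-empty
# reply is what the suite counts as a found endpoint.
kubectl exec -n pod-network-test-2246 host-test-container-pod -c agnhost -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v "^\s*$"'
kubectl exec -n pod-network-test-2246 host-test-container-pod -c agnhost -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v "^\s*$"'
Because host-test-container-pod runs on the host network, a successful reply from both netserver pods is what demonstrates node-to-pod UDP connectivity here.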
• [SLOW TEST:45.142 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":78,"skipped":1158,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:14:57.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 20 00:14:57.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8" in namespace "projected-700" to be "success or failure" Feb 20 00:14:57.323: INFO: Pod "downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.9913ms Feb 20 00:14:59.332: INFO: Pod "downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016068873s Feb 20 00:15:01.340: INFO: Pod "downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023740129s Feb 20 00:15:04.539: INFO: Pod "downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.223033009s Feb 20 00:15:07.044: INFO: Pod "downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.72826618s Feb 20 00:15:09.052: INFO: Pod "downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.735940474s Feb 20 00:15:11.064: INFO: Pod "downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.748449663s STEP: Saw pod success Feb 20 00:15:11.065: INFO: Pod "downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8" satisfied condition "success or failure" Feb 20 00:15:11.069: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8 container client-container: STEP: delete the pod Feb 20 00:15:11.116: INFO: Waiting for pod downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8 to disappear Feb 20 00:15:11.118: INFO: Pod downwardapi-volume-7b0f61f9-2ede-4edd-9bfe-2f301da831e8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:15:11.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-700" for this suite. • [SLOW TEST:13.903 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":79,"skipped":1161,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:15:11.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1911 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1911;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1911 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1911;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1911.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1911.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1911.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1911.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1911.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc;check="$$(dig +tcp +noall +answer 
+search _http._tcp.dns-test-service.dns-1911.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1911.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1911.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1911.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1911.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1911.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 68.12.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.12.68_udp@PTR;check="$$(dig +tcp +noall +answer +search 68.12.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.12.68_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1911 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1911;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1911 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1911;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1911.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1911.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1911.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1911.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1911.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1911.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1911.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1911.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1911.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1911.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1911.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1911.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 68.12.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.12.68_udp@PTR;check="$$(dig +tcp +noall +answer +search 68.12.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.12.68_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 20 00:15:23.406: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.412: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.417: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.424: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.432: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.437: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.443: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.451: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.486: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.491: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.496: INFO: Unable to read jessie_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.512: INFO: Unable to read jessie_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.518: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.524: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.531: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:23.563: INFO: Lookups using dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1911 wheezy_tcp@dns-test-service.dns-1911 wheezy_udp@dns-test-service.dns-1911.svc wheezy_tcp@dns-test-service.dns-1911.svc wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1911 jessie_tcp@dns-test-service.dns-1911 jessie_udp@dns-test-service.dns-1911.svc jessie_tcp@dns-test-service.dns-1911.svc jessie_udp@_http._tcp.dns-test-service.dns-1911.svc jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc] Feb 20 00:15:28.575: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.581: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.592: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.598: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.603: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.608: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.612: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.616: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.656: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.663: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.668: INFO: Unable to read jessie_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.671: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.676: INFO: Unable to read jessie_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.679: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.683: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.687: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:28.726: INFO: Lookups using dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1911 wheezy_tcp@dns-test-service.dns-1911 wheezy_udp@dns-test-service.dns-1911.svc wheezy_tcp@dns-test-service.dns-1911.svc wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1911 jessie_tcp@dns-test-service.dns-1911 jessie_udp@dns-test-service.dns-1911.svc jessie_tcp@dns-test-service.dns-1911.svc jessie_udp@_http._tcp.dns-test-service.dns-1911.svc jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc] Feb 20 00:15:33.578: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.591: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.600: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.606: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911 from pod 
dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.610: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.615: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.624: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.634: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.670: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.673: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.677: INFO: Unable to read jessie_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.679: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.685: INFO: Unable to read jessie_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.688: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.693: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.700: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:33.731: INFO: Lookups using dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1911 wheezy_tcp@dns-test-service.dns-1911 wheezy_udp@dns-test-service.dns-1911.svc wheezy_tcp@dns-test-service.dns-1911.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1911 jessie_tcp@dns-test-service.dns-1911 jessie_udp@dns-test-service.dns-1911.svc jessie_tcp@dns-test-service.dns-1911.svc jessie_udp@_http._tcp.dns-test-service.dns-1911.svc jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc] Feb 20 00:15:38.589: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.625: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.633: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.640: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.646: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.662: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.677: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.718: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.723: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.731: INFO: Unable to read jessie_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.736: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.741: INFO: Unable to read jessie_udp@dns-test-service.dns-1911.svc from pod 
dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.746: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.752: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.760: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:38.787: INFO: Lookups using dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1911 wheezy_tcp@dns-test-service.dns-1911 wheezy_udp@dns-test-service.dns-1911.svc wheezy_tcp@dns-test-service.dns-1911.svc wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1911 jessie_tcp@dns-test-service.dns-1911 jessie_udp@dns-test-service.dns-1911.svc jessie_tcp@dns-test-service.dns-1911.svc jessie_udp@_http._tcp.dns-test-service.dns-1911.svc jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc] Feb 20 00:15:43.572: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.577: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.583: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.588: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.593: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.598: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.604: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.609: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod 
dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.640: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.644: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.647: INFO: Unable to read jessie_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.651: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.657: INFO: Unable to read jessie_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.661: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.665: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:43.691: INFO: Lookups using dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1911 wheezy_tcp@dns-test-service.dns-1911 wheezy_udp@dns-test-service.dns-1911.svc wheezy_tcp@dns-test-service.dns-1911.svc wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1911 jessie_tcp@dns-test-service.dns-1911 jessie_udp@dns-test-service.dns-1911.svc jessie_tcp@dns-test-service.dns-1911.svc jessie_udp@_http._tcp.dns-test-service.dns-1911.svc jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc] Feb 20 00:15:48.573: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.579: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.583: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the 
server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.587: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.591: INFO: Unable to read wheezy_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.596: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.600: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.604: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.654: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.660: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.666: INFO: Unable to read jessie_udp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.676: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911 from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.684: INFO: Unable to read jessie_udp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.688: INFO: Unable to read jessie_tcp@dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.691: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.694: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc from pod dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088: the server could not find the requested resource (get pods dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088) Feb 20 00:15:48.711: INFO: Lookups using dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1911 wheezy_tcp@dns-test-service.dns-1911 wheezy_udp@dns-test-service.dns-1911.svc wheezy_tcp@dns-test-service.dns-1911.svc wheezy_udp@_http._tcp.dns-test-service.dns-1911.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1911 jessie_tcp@dns-test-service.dns-1911 jessie_udp@dns-test-service.dns-1911.svc jessie_tcp@dns-test-service.dns-1911.svc jessie_udp@_http._tcp.dns-test-service.dns-1911.svc jessie_tcp@_http._tcp.dns-test-service.dns-1911.svc] Feb 20 00:15:53.719: INFO: DNS probes using dns-1911/dns-test-3e5fcce0-4634-4a42-b538-6d8d53bfa088 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:15:54.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1911" for this suite. • [SLOW TEST:43.068 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":80,"skipped":1190,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:15:54.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 20 00:15:54.402: INFO: Waiting up to 5m0s for pod "pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5" in namespace "emptydir-2809" to be "success or failure" Feb 20 00:15:54.426: INFO: Pod "pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.21599ms Feb 20 00:15:56.433: INFO: Pod "pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030674555s Feb 20 00:15:58.440: INFO: Pod "pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037232701s Feb 20 00:16:00.448: INFO: Pod "pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045680089s Feb 20 00:16:02.455: INFO: Pod "pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.05258687s Feb 20 00:16:04.464: INFO: Pod "pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062103805s STEP: Saw pod success Feb 20 00:16:04.465: INFO: Pod "pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5" satisfied condition "success or failure" Feb 20 00:16:04.469: INFO: Trying to get logs from node jerma-node pod pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5 container test-container: STEP: delete the pod Feb 20 00:16:04.516: INFO: Waiting for pod pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5 to disappear Feb 20 00:16:04.544: INFO: Pod pod-4d1808ec-db31-4565-8d7d-c1993a3da5a5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:16:04.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2809" for this suite. • [SLOW TEST:10.349 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":81,"skipped":1199,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:16:04.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-21a69859-8a84-48d3-8a04-eaedab56e716 STEP: Creating a pod to test consume secrets Feb 20 00:16:04.804: INFO: Waiting up to 5m0s for pod "pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d" in namespace "secrets-4119" to be "success or failure" Feb 20 00:16:04.809: INFO: Pod "pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.749766ms Feb 20 00:16:06.825: INFO: Pod "pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020213251s Feb 20 00:16:08.835: INFO: Pod "pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030537276s Feb 20 00:16:10.843: INFO: Pod "pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038020687s Feb 20 00:16:12.855: INFO: Pod "pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.050902985s STEP: Saw pod success Feb 20 00:16:12.856: INFO: Pod "pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d" satisfied condition "success or failure" Feb 20 00:16:12.862: INFO: Trying to get logs from node jerma-node pod pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d container secret-volume-test: STEP: delete the pod Feb 20 00:16:12.918: INFO: Waiting for pod pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d to disappear Feb 20 00:16:12.925: INFO: Pod pod-secrets-a5f3dd1e-c512-4388-993a-c4620f31dd5d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:16:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4119" for this suite. STEP: Destroying namespace "secret-namespace-9860" for this suite. • [SLOW TEST:8.519 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":82,"skipped":1204,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:16:13.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 20 00:16:13.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b" in namespace "downward-api-497" to be "success or failure" Feb 20 00:16:13.390: INFO: Pod "downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423188ms Feb 20 00:16:15.397: INFO: Pod "downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010893604s Feb 20 00:16:17.406: INFO: Pod "downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0205824s Feb 20 00:16:19.415: INFO: Pod "downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.029338417s Feb 20 00:16:21.424: INFO: Pod "downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037999457s STEP: Saw pod success Feb 20 00:16:21.424: INFO: Pod "downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b" satisfied condition "success or failure" Feb 20 00:16:21.432: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b container client-container: STEP: delete the pod Feb 20 00:16:21.615: INFO: Waiting for pod downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b to disappear Feb 20 00:16:21.622: INFO: Pod downwardapi-volume-3baf82cc-d608-4be5-8d5c-1d1c43b1068b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:16:21.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-497" for this suite. • [SLOW TEST:8.561 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":83,"skipped":1211,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:16:21.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 20 00:16:21.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8976' Feb 20 00:16:22.336: INFO: stderr: "" Feb 20 00:16:22.336: INFO: stdout: "replicationcontroller/agnhost-master created\n" Feb 20 00:16:22.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8976' Feb 20 00:16:22.917: INFO: stderr: "" Feb 20 00:16:22.917: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
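(The polling below is the framework waiting for the RC's pod to come up: it repeatedly lists pods by the RC's label selector and counts how many are running and ready. A rough manual equivalent, reusing the kubeconfig and the generated namespace from this run:

    # list the pods behind the agnhost RC by label selector and show their phase
    kubectl --kubeconfig=/root/.kube/config get pods --namespace=kubectl-8976 \
      --selector=app=agnhost \
      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'

Repeat until the agnhost-master pod reports Running.)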
Feb 20 00:16:23.934: INFO: Selector matched 1 pods for map[app:agnhost] Feb 20 00:16:23.934: INFO: Found 0 / 1 Feb 20 00:16:24.926: INFO: Selector matched 1 pods for map[app:agnhost] Feb 20 00:16:24.926: INFO: Found 0 / 1 Feb 20 00:16:25.928: INFO: Selector matched 1 pods for map[app:agnhost] Feb 20 00:16:25.929: INFO: Found 0 / 1 Feb 20 00:16:26.974: INFO: Selector matched 1 pods for map[app:agnhost] Feb 20 00:16:26.974: INFO: Found 0 / 1 Feb 20 00:16:27.928: INFO: Selector matched 1 pods for map[app:agnhost] Feb 20 00:16:27.928: INFO: Found 0 / 1 Feb 20 00:16:28.925: INFO: Selector matched 1 pods for map[app:agnhost] Feb 20 00:16:28.925: INFO: Found 0 / 1 Feb 20 00:16:29.927: INFO: Selector matched 1 pods for map[app:agnhost] Feb 20 00:16:29.927: INFO: Found 1 / 1 Feb 20 00:16:29.927: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 20 00:16:29.933: INFO: Selector matched 1 pods for map[app:agnhost] Feb 20 00:16:29.933: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 20 00:16:29.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-prxjv --namespace=kubectl-8976' Feb 20 00:16:30.173: INFO: stderr: "" Feb 20 00:16:30.173: INFO: stdout: "Name: agnhost-master-prxjv\nNamespace: kubectl-8976\nPriority: 0\nNode: jerma-node/10.96.2.250\nStart Time: Thu, 20 Feb 2020 00:16:22 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nIPs:\n IP: 10.44.0.1\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: docker://16cc2c3e0be32bc9066c70e53f644e3f479f8c16dc86d23582287ce943361cb7\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 20 Feb 2020 00:16:29 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-s57sc (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-s57sc:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-s57sc\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-8976/agnhost-master-prxjv to jerma-node\n Normal Pulled 5s kubelet, jerma-node Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-node Created container agnhost-master\n Normal Started 1s kubelet, jerma-node Started container agnhost-master\n" Feb 20 00:16:30.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8976' Feb 20 00:16:30.305: INFO: stderr: "" Feb 20 00:16:30.305: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8976\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 
6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: agnhost-master-prxjv\n" Feb 20 00:16:30.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8976' Feb 20 00:16:30.466: INFO: stderr: "" Feb 20 00:16:30.467: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8976\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.175.222\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: <none>\n" Feb 20 00:16:30.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node' Feb 20 00:16:30.629: INFO: stderr: "" Feb 20 00:16:30.630: INFO: stdout: "Name: jerma-node\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 04 Jan 2020 11:59:52 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: jerma-node\n AcquireTime: <unset>\n RenewTime: Thu, 20 Feb 2020 00:16:30 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 04 Jan 2020 12:00:49 +0000 Sat, 04 Jan 2020 12:00:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Thu, 20 Feb 2020 00:15:09 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 20 Feb 2020 00:15:09 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 20 Feb 2020 00:15:09 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 20 Feb 2020 00:15:09 +0000 Sat, 04 Jan 2020 12:00:52 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n InternalIP: 10.96.2.250\n Hostname: jerma-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: bdc16344252549dd902c3a5d68b22f41\n System UUID: BDC16344-2525-49DD-902C-3A5D68B22F41\n Boot ID: eec61fc4-8bf6-487f-8f93-ea9731fe757a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-dsf66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system weave-net-kz8lv 20m (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n kubectl-8976 agnhost-master-prxjv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Feb 20 00:16:30.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8976' Feb 20 00:16:30.744: INFO: stderr: "" Feb 20 00:16:30.744: INFO: stdout: "Name: kubectl-8976\nLabels: e2e-framework=kubectl\n e2e-run=321171ef-a53d-4d69-8048-69a97ebb2fc5\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 20 00:16:30.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8976" for this suite. 
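(Each describe call above can be replayed by hand; the test exercised kubectl describe across the pod, replication controller, service, node, and namespace it touched. With the object names from this run:

    export KUBECONFIG=/root/.kube/config
    kubectl describe pod agnhost-master-prxjv --namespace=kubectl-8976
    kubectl describe rc agnhost-master --namespace=kubectl-8976
    kubectl describe service agnhost-master --namespace=kubectl-8976
    kubectl describe node jerma-node
    kubectl describe namespace kubectl-8976

The conformance check is essentially substring matching: it looks for the expected names, image, port, labels, and events in each stdout block rather than comparing the full layout.)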
• [SLOW TEST:9.124 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":280,"completed":84,"skipped":1219,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 20 00:16:30.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 20 00:16:30.873: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 8.40833ms)
Feb 20 00:16:30.878: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.418748ms)
Feb 20 00:16:30.881: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.498927ms)
Feb 20 00:16:30.886: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.422302ms)
Feb 20 00:16:30.889: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.57518ms)
Feb 20 00:16:30.893: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.945184ms)
Feb 20 00:16:30.897: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.15372ms)
Feb 20 00:16:30.900: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.40579ms)
Feb 20 00:16:30.904: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.81466ms)
Feb 20 00:16:30.909: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.633823ms)
Feb 20 00:16:30.913: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.36627ms)
Feb 20 00:16:30.917: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.599004ms)
Feb 20 00:16:30.947: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 30.108105ms)
Feb 20 00:16:30.954: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.662252ms)
Feb 20 00:16:30.958: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.167765ms)
Feb 20 00:16:30.963: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.634844ms)
Feb 20 00:16:30.968: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.068477ms)
Feb 20 00:16:30.973: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.111501ms)
Feb 20 00:16:30.977: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.232252ms)
Feb 20 00:16:30.982: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.559986ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:16:30.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4480" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":280,"completed":85,"skipped":1236,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:16:30.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4893.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4893.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4893.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4893.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4893.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4893.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 00:16:45.267: INFO: DNS probes using dns-4893/dns-test-42514409-ca79-472f-919c-194644dbd9f9 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:16:45.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4893" for this suite.

• [SLOW TEST:14.507 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":86,"skipped":1241,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:16:45.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-fxtn
STEP: Creating a pod to test atomic-volume-subpath
Feb 20 00:16:45.704: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fxtn" in namespace "subpath-7205" to be "success or failure"
Feb 20 00:16:45.756: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Pending", Reason="", readiness=false. Elapsed: 52.637481ms
Feb 20 00:16:47.766: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061976093s
Feb 20 00:16:49.773: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06926701s
Feb 20 00:16:51.780: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076515186s
Feb 20 00:16:53.790: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086191216s
Feb 20 00:16:55.797: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 10.093263923s
Feb 20 00:16:57.809: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 12.104640357s
Feb 20 00:16:59.818: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 14.113879662s
Feb 20 00:17:01.825: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 16.120700225s
Feb 20 00:17:03.837: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 18.132862732s
Feb 20 00:17:05.849: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 20.144690466s
Feb 20 00:17:07.859: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 22.155259385s
Feb 20 00:17:09.868: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 24.164118477s
Feb 20 00:17:11.876: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 26.172230227s
Feb 20 00:17:13.888: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Running", Reason="", readiness=true. Elapsed: 28.183887381s
Feb 20 00:17:15.938: INFO: Pod "pod-subpath-test-configmap-fxtn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.234233833s
STEP: Saw pod success
Feb 20 00:17:15.938: INFO: Pod "pod-subpath-test-configmap-fxtn" satisfied condition "success or failure"
Feb 20 00:17:15.942: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-fxtn container test-container-subpath-configmap-fxtn: 
STEP: delete the pod
Feb 20 00:17:15.974: INFO: Waiting for pod pod-subpath-test-configmap-fxtn to disappear
Feb 20 00:17:15.980: INFO: Pod pod-subpath-test-configmap-fxtn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-fxtn
Feb 20 00:17:15.980: INFO: Deleting pod "pod-subpath-test-configmap-fxtn" in namespace "subpath-7205"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:17:15.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7205" for this suite.

• [SLOW TEST:30.490 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":87,"skipped":1246,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:17:15.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Feb 20 00:17:16.109: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9651" to be "success or failure"
Feb 20 00:17:16.123: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.375465ms
Feb 20 00:17:18.133: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0238124s
Feb 20 00:17:20.147: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038417574s
Feb 20 00:17:22.158: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04862293s
Feb 20 00:17:24.166: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057261374s
Feb 20 00:17:26.173: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063756191s
Feb 20 00:17:28.181: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.071924766s
STEP: Saw pod success
Feb 20 00:17:28.181: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 20 00:17:28.184: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 20 00:17:28.319: INFO: Waiting for pod pod-host-path-test to disappear
Feb 20 00:17:28.323: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:17:28.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9651" for this suite.

• [SLOW TEST:12.341 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":88,"skipped":1272,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:17:28.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:17:29.508: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 00:17:31.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:17:33.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:17:35.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:17:37.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754649, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:17:40.567: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:17:40.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-655" for this suite.
STEP: Destroying namespace "webhook-655-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.769 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":89,"skipped":1277,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:17:41.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-5494
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 20 00:17:41.188: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 20 00:17:41.273: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:17:43.376: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:17:45.280: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:17:47.467: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:17:49.818: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:17:51.280: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:17:53.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:17:55.279: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:17:57.280: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:17:59.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:18:01.279: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 20 00:18:01.284: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 20 00:18:03.294: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 20 00:18:11.369: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5494 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:18:11.369: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:18:11.430188       9 log.go:172] (0xc002b3f8c0) (0xc000b3f040) Create stream
I0220 00:18:11.430975       9 log.go:172] (0xc002b3f8c0) (0xc000b3f040) Stream added, broadcasting: 1
I0220 00:18:11.437531       9 log.go:172] (0xc002b3f8c0) Reply frame received for 1
I0220 00:18:11.437579       9 log.go:172] (0xc002b3f8c0) (0xc000a44820) Create stream
I0220 00:18:11.437592       9 log.go:172] (0xc002b3f8c0) (0xc000a44820) Stream added, broadcasting: 3
I0220 00:18:11.438992       9 log.go:172] (0xc002b3f8c0) Reply frame received for 3
I0220 00:18:11.439036       9 log.go:172] (0xc002b3f8c0) (0xc001ac60a0) Create stream
I0220 00:18:11.439053       9 log.go:172] (0xc002b3f8c0) (0xc001ac60a0) Stream added, broadcasting: 5
I0220 00:18:11.440548       9 log.go:172] (0xc002b3f8c0) Reply frame received for 5
I0220 00:18:11.532321       9 log.go:172] (0xc002b3f8c0) Data frame received for 3
I0220 00:18:11.532412       9 log.go:172] (0xc000a44820) (3) Data frame handling
I0220 00:18:11.532442       9 log.go:172] (0xc000a44820) (3) Data frame sent
I0220 00:18:11.627669       9 log.go:172] (0xc002b3f8c0) Data frame received for 1
I0220 00:18:11.627801       9 log.go:172] (0xc000b3f040) (1) Data frame handling
I0220 00:18:11.627844       9 log.go:172] (0xc000b3f040) (1) Data frame sent
I0220 00:18:11.628494       9 log.go:172] (0xc002b3f8c0) (0xc000b3f040) Stream removed, broadcasting: 1
I0220 00:18:11.628729       9 log.go:172] (0xc002b3f8c0) (0xc000a44820) Stream removed, broadcasting: 3
I0220 00:18:11.628839       9 log.go:172] (0xc002b3f8c0) (0xc001ac60a0) Stream removed, broadcasting: 5
I0220 00:18:11.628882       9 log.go:172] (0xc002b3f8c0) Go away received
I0220 00:18:11.628975       9 log.go:172] (0xc002b3f8c0) (0xc000b3f040) Stream removed, broadcasting: 1
I0220 00:18:11.629000       9 log.go:172] (0xc002b3f8c0) (0xc000a44820) Stream removed, broadcasting: 3
I0220 00:18:11.629021       9 log.go:172] (0xc002b3f8c0) (0xc001ac60a0) Stream removed, broadcasting: 5
Feb 20 00:18:11.629: INFO: Waiting for responses: map[]
Feb 20 00:18:11.637: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5494 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:18:11.637: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:18:11.692766       9 log.go:172] (0xc00287d550) (0xc001586780) Create stream
I0220 00:18:11.692889       9 log.go:172] (0xc00287d550) (0xc001586780) Stream added, broadcasting: 1
I0220 00:18:11.696686       9 log.go:172] (0xc00287d550) Reply frame received for 1
I0220 00:18:11.696724       9 log.go:172] (0xc00287d550) (0xc000b3f360) Create stream
I0220 00:18:11.696740       9 log.go:172] (0xc00287d550) (0xc000b3f360) Stream added, broadcasting: 3
I0220 00:18:11.698182       9 log.go:172] (0xc00287d550) Reply frame received for 3
I0220 00:18:11.698203       9 log.go:172] (0xc00287d550) (0xc000b3f400) Create stream
I0220 00:18:11.698215       9 log.go:172] (0xc00287d550) (0xc000b3f400) Stream added, broadcasting: 5
I0220 00:18:11.699652       9 log.go:172] (0xc00287d550) Reply frame received for 5
I0220 00:18:11.776697       9 log.go:172] (0xc00287d550) Data frame received for 3
I0220 00:18:11.776842       9 log.go:172] (0xc000b3f360) (3) Data frame handling
I0220 00:18:11.776871       9 log.go:172] (0xc000b3f360) (3) Data frame sent
I0220 00:18:11.901757       9 log.go:172] (0xc00287d550) Data frame received for 1
I0220 00:18:11.901877       9 log.go:172] (0xc00287d550) (0xc000b3f360) Stream removed, broadcasting: 3
I0220 00:18:11.901936       9 log.go:172] (0xc001586780) (1) Data frame handling
I0220 00:18:11.901972       9 log.go:172] (0xc001586780) (1) Data frame sent
I0220 00:18:11.901999       9 log.go:172] (0xc00287d550) (0xc001586780) Stream removed, broadcasting: 1
I0220 00:18:11.902150       9 log.go:172] (0xc00287d550) (0xc000b3f400) Stream removed, broadcasting: 5
I0220 00:18:11.902250       9 log.go:172] (0xc00287d550) Go away received
I0220 00:18:11.902292       9 log.go:172] (0xc00287d550) (0xc001586780) Stream removed, broadcasting: 1
I0220 00:18:11.902318       9 log.go:172] (0xc00287d550) (0xc000b3f360) Stream removed, broadcasting: 3
I0220 00:18:11.902329       9 log.go:172] (0xc00287d550) (0xc000b3f400) Stream removed, broadcasting: 5
Feb 20 00:18:11.902: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:18:11.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5494" for this suite.

• [SLOW TEST:30.813 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":90,"skipped":1279,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:18:11.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 20 00:18:12.224: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:18:14.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8746" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":91,"skipped":1284,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:18:14.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 20 00:18:14.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5497'
Feb 20 00:18:14.766: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 20 00:18:14.766: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740
Feb 20 00:18:16.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5497'
Feb 20 00:18:17.179: INFO: stderr: ""
Feb 20 00:18:17.179: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:18:17.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5497" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":280,"completed":92,"skipped":1294,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:18:17.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 20 00:18:17.963: INFO: Number of nodes with available pods: 0
Feb 20 00:18:17.964: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:19.879: INFO: Number of nodes with available pods: 0
Feb 20 00:18:19.879: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:21.453: INFO: Number of nodes with available pods: 0
Feb 20 00:18:21.454: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:22.264: INFO: Number of nodes with available pods: 0
Feb 20 00:18:22.264: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:23.103: INFO: Number of nodes with available pods: 0
Feb 20 00:18:23.104: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:25.840: INFO: Number of nodes with available pods: 0
Feb 20 00:18:25.840: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:26.075: INFO: Number of nodes with available pods: 0
Feb 20 00:18:26.076: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:26.973: INFO: Number of nodes with available pods: 0
Feb 20 00:18:26.973: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:28.151: INFO: Number of nodes with available pods: 0
Feb 20 00:18:28.151: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:29.222: INFO: Number of nodes with available pods: 0
Feb 20 00:18:29.222: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:29.974: INFO: Number of nodes with available pods: 0
Feb 20 00:18:29.974: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:31.190: INFO: Number of nodes with available pods: 0
Feb 20 00:18:31.190: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:31.989: INFO: Number of nodes with available pods: 0
Feb 20 00:18:31.989: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:32.994: INFO: Number of nodes with available pods: 1
Feb 20 00:18:32.994: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:33.979: INFO: Number of nodes with available pods: 1
Feb 20 00:18:33.979: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:34.974: INFO: Number of nodes with available pods: 1
Feb 20 00:18:34.974: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:35.978: INFO: Number of nodes with available pods: 1
Feb 20 00:18:35.978: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:37.241: INFO: Number of nodes with available pods: 1
Feb 20 00:18:37.241: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:37.978: INFO: Number of nodes with available pods: 1
Feb 20 00:18:37.978: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:38.976: INFO: Number of nodes with available pods: 2
Feb 20 00:18:38.976: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 20 00:18:39.013: INFO: Number of nodes with available pods: 1
Feb 20 00:18:39.013: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:40.031: INFO: Number of nodes with available pods: 1
Feb 20 00:18:40.031: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:41.058: INFO: Number of nodes with available pods: 1
Feb 20 00:18:41.059: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:42.038: INFO: Number of nodes with available pods: 1
Feb 20 00:18:42.039: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:43.029: INFO: Number of nodes with available pods: 1
Feb 20 00:18:43.029: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:44.030: INFO: Number of nodes with available pods: 1
Feb 20 00:18:44.030: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:45.025: INFO: Number of nodes with available pods: 1
Feb 20 00:18:45.025: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:46.074: INFO: Number of nodes with available pods: 1
Feb 20 00:18:46.075: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:47.023: INFO: Number of nodes with available pods: 1
Feb 20 00:18:47.023: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:48.024: INFO: Number of nodes with available pods: 1
Feb 20 00:18:48.024: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:49.039: INFO: Number of nodes with available pods: 1
Feb 20 00:18:49.039: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:50.031: INFO: Number of nodes with available pods: 1
Feb 20 00:18:50.032: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:51.031: INFO: Number of nodes with available pods: 1
Feb 20 00:18:51.031: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:52.033: INFO: Number of nodes with available pods: 1
Feb 20 00:18:52.033: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:53.027: INFO: Number of nodes with available pods: 1
Feb 20 00:18:53.028: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:54.125: INFO: Number of nodes with available pods: 1
Feb 20 00:18:54.125: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:55.043: INFO: Number of nodes with available pods: 1
Feb 20 00:18:55.043: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:56.029: INFO: Number of nodes with available pods: 1
Feb 20 00:18:56.030: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:57.024: INFO: Number of nodes with available pods: 1
Feb 20 00:18:57.025: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:58.095: INFO: Number of nodes with available pods: 1
Feb 20 00:18:58.095: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:18:59.022: INFO: Number of nodes with available pods: 2
Feb 20 00:18:59.022: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3678, will wait for the garbage collector to delete the pods
Feb 20 00:18:59.084: INFO: Deleting DaemonSet.extensions daemon-set took: 7.552246ms
Feb 20 00:18:59.485: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.52558ms
Feb 20 00:19:13.200: INFO: Number of nodes with available pods: 0
Feb 20 00:19:13.200: INFO: Number of running nodes: 0, number of available pods: 0
Feb 20 00:19:13.205: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3678/daemonsets","resourceVersion":"9500475"},"items":null}

Feb 20 00:19:13.208: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3678/pods","resourceVersion":"9500475"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:19:13.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3678" for this suite.

• [SLOW TEST:55.897 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":93,"skipped":1313,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:19:13.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:19:13.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5395" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":94,"skipped":1363,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:19:13.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 00:19:13.560: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29" in namespace "projected-9481" to be "success or failure"
Feb 20 00:19:13.565: INFO: Pod "downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.835869ms
Feb 20 00:19:15.573: INFO: Pod "downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013141635s
Feb 20 00:19:17.581: INFO: Pod "downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021320018s
Feb 20 00:19:19.588: INFO: Pod "downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028084556s
Feb 20 00:19:21.596: INFO: Pod "downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036288346s
Feb 20 00:19:23.604: INFO: Pod "downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044559893s
STEP: Saw pod success
Feb 20 00:19:23.605: INFO: Pod "downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29" satisfied condition "success or failure"
Feb 20 00:19:23.610: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29 container client-container: 
STEP: delete the pod
Feb 20 00:19:23.702: INFO: Waiting for pod downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29 to disappear
Feb 20 00:19:23.711: INFO: Pod downwardapi-volume-fac42e90-a885-4a39-bf6e-675640059b29 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:19:23.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9481" for this suite.

• [SLOW TEST:10.235 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":95,"skipped":1426,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:19:23.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 20 00:19:23.847: INFO: Waiting up to 5m0s for pod "pod-a1c0a205-db69-403e-a656-8c9527c4b157" in namespace "emptydir-9421" to be "success or failure"
Feb 20 00:19:23.857: INFO: Pod "pod-a1c0a205-db69-403e-a656-8c9527c4b157": Phase="Pending", Reason="", readiness=false. Elapsed: 9.844293ms
Feb 20 00:19:25.865: INFO: Pod "pod-a1c0a205-db69-403e-a656-8c9527c4b157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018276253s
Feb 20 00:19:27.873: INFO: Pod "pod-a1c0a205-db69-403e-a656-8c9527c4b157": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025972221s
Feb 20 00:19:29.901: INFO: Pod "pod-a1c0a205-db69-403e-a656-8c9527c4b157": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05421786s
Feb 20 00:19:31.913: INFO: Pod "pod-a1c0a205-db69-403e-a656-8c9527c4b157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066273396s
STEP: Saw pod success
Feb 20 00:19:31.913: INFO: Pod "pod-a1c0a205-db69-403e-a656-8c9527c4b157" satisfied condition "success or failure"
Feb 20 00:19:31.917: INFO: Trying to get logs from node jerma-node pod pod-a1c0a205-db69-403e-a656-8c9527c4b157 container test-container: 
STEP: delete the pod
Feb 20 00:19:32.025: INFO: Waiting for pod pod-a1c0a205-db69-403e-a656-8c9527c4b157 to disappear
Feb 20 00:19:32.084: INFO: Pod pod-a1c0a205-db69-403e-a656-8c9527c4b157 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:19:32.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9421" for this suite.

• [SLOW TEST:8.364 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":96,"skipped":1432,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:19:32.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-23d7c567-009a-400b-85e8-f8cdc3b0a2dc
STEP: Creating a pod to test consume secrets
Feb 20 00:19:32.385: INFO: Waiting up to 5m0s for pod "pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1" in namespace "secrets-412" to be "success or failure"
Feb 20 00:19:32.413: INFO: Pod "pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1": Phase="Pending", Reason="", readiness=false. Elapsed: 27.218199ms
Feb 20 00:19:34.474: INFO: Pod "pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088253712s
Feb 20 00:19:36.484: INFO: Pod "pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09812467s
Feb 20 00:19:38.496: INFO: Pod "pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11092731s
Feb 20 00:19:40.507: INFO: Pod "pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121607473s
STEP: Saw pod success
Feb 20 00:19:40.507: INFO: Pod "pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1" satisfied condition "success or failure"
Feb 20 00:19:40.511: INFO: Trying to get logs from node jerma-node pod pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1 container secret-volume-test: 
STEP: delete the pod
Feb 20 00:19:40.591: INFO: Waiting for pod pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1 to disappear
Feb 20 00:19:40.600: INFO: Pod pod-secrets-801baec0-b0c4-47bd-ad8f-38156f8548b1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:19:40.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-412" for this suite.

• [SLOW TEST:8.518 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":97,"skipped":1447,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:19:40.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Feb 20 00:19:41.035: INFO: Created pod &Pod{ObjectMeta:{dns-1852  dns-1852 /api/v1/namespaces/dns-1852/pods/dns-1852 54696bbe-d7d3-445c-9fd0-8fa973521c39 9500642 0 2020-02-20 00:19:40 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2vk97,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2vk97,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2vk97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
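
The dump above is verbose, but the fields this test exercises are DNSPolicy:None plus a DNSConfig of Nameservers:[1.1.1.1] and Searches:[resolv.conf.local]. The same pod expressed as a manifest sketch, using only values visible in the dump:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dns-1852
  namespace: dns-1852
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
  dnsPolicy: None                     # ignore cluster DNS entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
EOF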
Feb 20 00:19:41.226: INFO: The status of Pod dns-1852 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:19:43.231: INFO: The status of Pod dns-1852 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:19:45.273: INFO: The status of Pod dns-1852 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:19:47.342: INFO: The status of Pod dns-1852 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:19:49.268: INFO: The status of Pod dns-1852 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:19:51.232: INFO: The status of Pod dns-1852 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Feb 20 00:19:51.232: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1852 PodName:dns-1852 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:19:51.232: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:19:51.295439       9 log.go:172] (0xc002b3e0b0) (0xc001072500) Create stream
I0220 00:19:51.295654       9 log.go:172] (0xc002b3e0b0) (0xc001072500) Stream added, broadcasting: 1
I0220 00:19:51.301173       9 log.go:172] (0xc002b3e0b0) Reply frame received for 1
I0220 00:19:51.301225       9 log.go:172] (0xc002b3e0b0) (0xc000f3a1e0) Create stream
I0220 00:19:51.301240       9 log.go:172] (0xc002b3e0b0) (0xc000f3a1e0) Stream added, broadcasting: 3
I0220 00:19:51.306403       9 log.go:172] (0xc002b3e0b0) Reply frame received for 3
I0220 00:19:51.306678       9 log.go:172] (0xc002b3e0b0) (0xc001128500) Create stream
I0220 00:19:51.306722       9 log.go:172] (0xc002b3e0b0) (0xc001128500) Stream added, broadcasting: 5
I0220 00:19:51.310177       9 log.go:172] (0xc002b3e0b0) Reply frame received for 5
I0220 00:19:51.418685       9 log.go:172] (0xc002b3e0b0) Data frame received for 3
I0220 00:19:51.418845       9 log.go:172] (0xc000f3a1e0) (3) Data frame handling
I0220 00:19:51.418871       9 log.go:172] (0xc000f3a1e0) (3) Data frame sent
I0220 00:19:51.493919       9 log.go:172] (0xc002b3e0b0) Data frame received for 1
I0220 00:19:51.494151       9 log.go:172] (0xc002b3e0b0) (0xc001128500) Stream removed, broadcasting: 5
I0220 00:19:51.494260       9 log.go:172] (0xc001072500) (1) Data frame handling
I0220 00:19:51.494295       9 log.go:172] (0xc001072500) (1) Data frame sent
I0220 00:19:51.494358       9 log.go:172] (0xc002b3e0b0) (0xc000f3a1e0) Stream removed, broadcasting: 3
I0220 00:19:51.494406       9 log.go:172] (0xc002b3e0b0) (0xc001072500) Stream removed, broadcasting: 1
I0220 00:19:51.494436       9 log.go:172] (0xc002b3e0b0) Go away received
I0220 00:19:51.494759       9 log.go:172] (0xc002b3e0b0) (0xc001072500) Stream removed, broadcasting: 1
I0220 00:19:51.494792       9 log.go:172] (0xc002b3e0b0) (0xc000f3a1e0) Stream removed, broadcasting: 3
I0220 00:19:51.494815       9 log.go:172] (0xc002b3e0b0) (0xc001128500) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Feb 20 00:19:51.494: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1852 PodName:dns-1852 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:19:51.495: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:19:51.555865       9 log.go:172] (0xc001508370) (0xc000b868c0) Create stream
I0220 00:19:51.555959       9 log.go:172] (0xc001508370) (0xc000b868c0) Stream added, broadcasting: 1
I0220 00:19:51.560021       9 log.go:172] (0xc001508370) Reply frame received for 1
I0220 00:19:51.560045       9 log.go:172] (0xc001508370) (0xc000b86a00) Create stream
I0220 00:19:51.560052       9 log.go:172] (0xc001508370) (0xc000b86a00) Stream added, broadcasting: 3
I0220 00:19:51.561038       9 log.go:172] (0xc001508370) Reply frame received for 3
I0220 00:19:51.561056       9 log.go:172] (0xc001508370) (0xc0011285a0) Create stream
I0220 00:19:51.561063       9 log.go:172] (0xc001508370) (0xc0011285a0) Stream added, broadcasting: 5
I0220 00:19:51.562175       9 log.go:172] (0xc001508370) Reply frame received for 5
I0220 00:19:51.625814       9 log.go:172] (0xc001508370) Data frame received for 3
I0220 00:19:51.625926       9 log.go:172] (0xc000b86a00) (3) Data frame handling
I0220 00:19:51.625954       9 log.go:172] (0xc000b86a00) (3) Data frame sent
I0220 00:19:51.687005       9 log.go:172] (0xc001508370) (0xc000b86a00) Stream removed, broadcasting: 3
I0220 00:19:51.687147       9 log.go:172] (0xc001508370) Data frame received for 1
I0220 00:19:51.687188       9 log.go:172] (0xc001508370) (0xc0011285a0) Stream removed, broadcasting: 5
I0220 00:19:51.687234       9 log.go:172] (0xc000b868c0) (1) Data frame handling
I0220 00:19:51.687256       9 log.go:172] (0xc000b868c0) (1) Data frame sent
I0220 00:19:51.687276       9 log.go:172] (0xc001508370) (0xc000b868c0) Stream removed, broadcasting: 1
I0220 00:19:51.687292       9 log.go:172] (0xc001508370) Go away received
I0220 00:19:51.687535       9 log.go:172] (0xc001508370) (0xc000b868c0) Stream removed, broadcasting: 1
I0220 00:19:51.687553       9 log.go:172] (0xc001508370) (0xc000b86a00) Stream removed, broadcasting: 3
I0220 00:19:51.687563       9 log.go:172] (0xc001508370) (0xc0011285a0) Stream removed, broadcasting: 5
Feb 20 00:19:51.687: INFO: Deleting pod dns-1852...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:19:51.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1852" for this suite.

• [SLOW TEST:11.126 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":98,"skipped":1448,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:19:51.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 20 00:19:51.859: INFO: Waiting up to 5m0s for pod "pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f" in namespace "emptydir-4977" to be "success or failure"
Feb 20 00:19:51.865: INFO: Pod "pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.373105ms
Feb 20 00:19:53.875: INFO: Pod "pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01508785s
Feb 20 00:19:55.896: INFO: Pod "pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036695636s
Feb 20 00:19:57.905: INFO: Pod "pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045182361s
Feb 20 00:19:59.958: INFO: Pod "pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097989207s
Feb 20 00:20:01.964: INFO: Pod "pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104345368s
STEP: Saw pod success
Feb 20 00:20:01.964: INFO: Pod "pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f" satisfied condition "success or failure"
Feb 20 00:20:01.967: INFO: Trying to get logs from node jerma-node pod pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f container test-container: 
STEP: delete the pod
Feb 20 00:20:02.078: INFO: Waiting for pod pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f to disappear
Feb 20 00:20:02.087: INFO: Pod pod-1e6a94c1-2f5f-44a0-a5d3-b76f6eafce9f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:20:02.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4977" for this suite.

• [SLOW TEST:10.390 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":99,"skipped":1449,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:20:02.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-6852/configmap-test-6079fd4c-790a-4e86-a193-492ca45301d3
STEP: Creating a pod to test consume configMaps
Feb 20 00:20:02.279: INFO: Waiting up to 5m0s for pod "pod-configmaps-690642b8-187a-4bbf-9951-409306836653" in namespace "configmap-6852" to be "success or failure"
Feb 20 00:20:02.314: INFO: Pod "pod-configmaps-690642b8-187a-4bbf-9951-409306836653": Phase="Pending", Reason="", readiness=false. Elapsed: 33.985958ms
Feb 20 00:20:04.320: INFO: Pod "pod-configmaps-690642b8-187a-4bbf-9951-409306836653": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040088451s
Feb 20 00:20:06.352: INFO: Pod "pod-configmaps-690642b8-187a-4bbf-9951-409306836653": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072216349s
Feb 20 00:20:08.362: INFO: Pod "pod-configmaps-690642b8-187a-4bbf-9951-409306836653": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082248872s
Feb 20 00:20:10.913: INFO: Pod "pod-configmaps-690642b8-187a-4bbf-9951-409306836653": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.633181293s
STEP: Saw pod success
Feb 20 00:20:10.913: INFO: Pod "pod-configmaps-690642b8-187a-4bbf-9951-409306836653" satisfied condition "success or failure"
Feb 20 00:20:10.926: INFO: Trying to get logs from node jerma-node pod pod-configmaps-690642b8-187a-4bbf-9951-409306836653 container env-test: 
STEP: delete the pod
Feb 20 00:20:11.116: INFO: Waiting for pod pod-configmaps-690642b8-187a-4bbf-9951-409306836653 to disappear
Feb 20 00:20:11.127: INFO: Pod pod-configmaps-690642b8-187a-4bbf-9951-409306836653 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:20:11.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6852" for this suite.

• [SLOW TEST:9.006 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":100,"skipped":1473,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:20:11.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:20:22.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1821" for this suite.

• [SLOW TEST:11.355 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":101,"skipped":1479,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:20:22.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 20 00:20:22.644: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 20 00:20:22.663: INFO: Waiting for terminating namespaces to be deleted...
Feb 20 00:20:22.666: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 20 00:20:22.675: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 20 00:20:22.675: INFO: 	Container weave ready: true, restart count 1
Feb 20 00:20:22.675: INFO: 	Container weave-npc ready: true, restart count 0
Feb 20 00:20:22.675: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 20 00:20:22.675: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 20 00:20:22.675: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 20 00:20:22.698: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 20 00:20:22.698: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 20 00:20:22.698: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 20 00:20:22.698: INFO: 	Container weave ready: true, restart count 0
Feb 20 00:20:22.698: INFO: 	Container weave-npc ready: true, restart count 0
Feb 20 00:20:22.698: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 20 00:20:22.698: INFO: 	Container kube-controller-manager ready: true, restart count 14
Feb 20 00:20:22.698: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 20 00:20:22.698: INFO: 	Container kube-scheduler ready: true, restart count 18
Feb 20 00:20:22.698: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 20 00:20:22.698: INFO: 	Container etcd ready: true, restart count 1
Feb 20 00:20:22.698: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 20 00:20:22.698: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 20 00:20:22.698: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 20 00:20:22.698: INFO: 	Container coredns ready: true, restart count 0
Feb 20 00:20:22.698: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 20 00:20:22.698: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f4f456cb37a6fb], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:20:23.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7809" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":280,"completed":102,"skipped":1487,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:20:23.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:20:32.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9558" for this suite.

• [SLOW TEST:8.302 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":103,"skipped":1563,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:20:32.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-787
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-787
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-787
Feb 20 00:20:32.333: INFO: Found 0 stateful pods, waiting for 1
Feb 20 00:20:42.342: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 20 00:20:42.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-787 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 00:20:42.850: INFO: stderr: "I0220 00:20:42.504269    1599 log.go:172] (0xc000bd8f20) (0xc000606280) Create stream\nI0220 00:20:42.504456    1599 log.go:172] (0xc000bd8f20) (0xc000606280) Stream added, broadcasting: 1\nI0220 00:20:42.515273    1599 log.go:172] (0xc000bd8f20) Reply frame received for 1\nI0220 00:20:42.515334    1599 log.go:172] (0xc000bd8f20) (0xc000709a40) Create stream\nI0220 00:20:42.515345    1599 log.go:172] (0xc000bd8f20) (0xc000709a40) Stream added, broadcasting: 3\nI0220 00:20:42.516986    1599 log.go:172] (0xc000bd8f20) Reply frame received for 3\nI0220 00:20:42.517041    1599 log.go:172] (0xc000bd8f20) (0xc000709ae0) Create stream\nI0220 00:20:42.517055    1599 log.go:172] (0xc000bd8f20) (0xc000709ae0) Stream added, broadcasting: 5\nI0220 00:20:42.519188    1599 log.go:172] (0xc000bd8f20) Reply frame received for 5\nI0220 00:20:42.650490    1599 log.go:172] (0xc000bd8f20) Data frame received for 5\nI0220 00:20:42.650663    1599 log.go:172] (0xc000709ae0) (5) Data frame handling\nI0220 00:20:42.650736    1599 log.go:172] (0xc000709ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 00:20:42.695548    1599 log.go:172] (0xc000bd8f20) Data frame received for 3\nI0220 00:20:42.695977    1599 log.go:172] (0xc000709a40) (3) Data frame handling\nI0220 00:20:42.696006    1599 log.go:172] (0xc000709a40) (3) Data frame sent\nI0220 00:20:42.829839    1599 log.go:172] (0xc000bd8f20) Data frame received for 1\nI0220 00:20:42.830789    1599 log.go:172] (0xc000606280) (1) Data frame handling\nI0220 00:20:42.830906    1599 log.go:172] (0xc000606280) (1) Data frame sent\nI0220 00:20:42.832578    1599 log.go:172] (0xc000bd8f20) (0xc000606280) Stream removed, broadcasting: 1\nI0220 00:20:42.833608    1599 log.go:172] (0xc000bd8f20) (0xc000709a40) Stream removed, broadcasting: 3\nI0220 00:20:42.833649    1599 log.go:172] (0xc000bd8f20) (0xc000709ae0) Stream removed, broadcasting: 5\nI0220 00:20:42.833687    1599 log.go:172] (0xc000bd8f20) (0xc000606280) Stream removed, broadcasting: 1\nI0220 00:20:42.833705    1599 log.go:172] (0xc000bd8f20) (0xc000709a40) Stream removed, broadcasting: 3\nI0220 00:20:42.833721    1599 log.go:172] (0xc000bd8f20) (0xc000709ae0) Stream removed, broadcasting: 5\n"
Feb 20 00:20:42.850: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 00:20:42.850: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 20 00:20:42.862: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 20 00:20:52.871: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 00:20:52.871: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 00:20:52.988: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998566s
Feb 20 00:20:54.003: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.904432359s
Feb 20 00:20:55.010: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.890022769s
Feb 20 00:20:56.017: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.88365378s
Feb 20 00:20:57.024: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.875995995s
Feb 20 00:20:58.032: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.868850377s
Feb 20 00:20:59.100: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.861189403s
Feb 20 00:21:00.108: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.792616702s
Feb 20 00:21:01.113: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.785309685s
Feb 20 00:21:02.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 780.520567ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-787
Feb 20 00:21:03.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 00:21:03.739: INFO: stderr: "I0220 00:21:03.285390    1621 log.go:172] (0xc0005c40b0) (0xc000457540) Create stream\nI0220 00:21:03.285476    1621 log.go:172] (0xc0005c40b0) (0xc000457540) Stream added, broadcasting: 1\nI0220 00:21:03.288994    1621 log.go:172] (0xc0005c40b0) Reply frame received for 1\nI0220 00:21:03.289032    1621 log.go:172] (0xc0005c40b0) (0xc0008e8000) Create stream\nI0220 00:21:03.289041    1621 log.go:172] (0xc0005c40b0) (0xc0008e8000) Stream added, broadcasting: 3\nI0220 00:21:03.291642    1621 log.go:172] (0xc0005c40b0) Reply frame received for 3\nI0220 00:21:03.291699    1621 log.go:172] (0xc0005c40b0) (0xc00063dc20) Create stream\nI0220 00:21:03.291718    1621 log.go:172] (0xc0005c40b0) (0xc00063dc20) Stream added, broadcasting: 5\nI0220 00:21:03.293241    1621 log.go:172] (0xc0005c40b0) Reply frame received for 5\nI0220 00:21:03.641500    1621 log.go:172] (0xc0005c40b0) Data frame received for 3\nI0220 00:21:03.641598    1621 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0220 00:21:03.641628    1621 log.go:172] (0xc0008e8000) (3) Data frame sent\nI0220 00:21:03.641708    1621 log.go:172] (0xc0005c40b0) Data frame received for 5\nI0220 00:21:03.641756    1621 log.go:172] (0xc00063dc20) (5) Data frame handling\nI0220 00:21:03.641777    1621 log.go:172] (0xc00063dc20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0220 00:21:03.725731    1621 log.go:172] (0xc0005c40b0) Data frame received for 1\nI0220 00:21:03.725793    1621 log.go:172] (0xc0005c40b0) (0xc0008e8000) Stream removed, broadcasting: 3\nI0220 00:21:03.725848    1621 log.go:172] (0xc000457540) (1) Data frame handling\nI0220 00:21:03.725870    1621 log.go:172] (0xc000457540) (1) Data frame sent\nI0220 00:21:03.725916    1621 log.go:172] (0xc0005c40b0) (0xc00063dc20) Stream removed, broadcasting: 5\nI0220 00:21:03.725951    1621 log.go:172] (0xc0005c40b0) (0xc000457540) Stream removed, broadcasting: 1\nI0220 00:21:03.725972    1621 log.go:172] (0xc0005c40b0) Go away received\nI0220 00:21:03.726973    1621 log.go:172] (0xc0005c40b0) (0xc000457540) Stream removed, broadcasting: 1\nI0220 00:21:03.726990    1621 log.go:172] (0xc0005c40b0) (0xc0008e8000) Stream removed, broadcasting: 3\nI0220 00:21:03.727000    1621 log.go:172] (0xc0005c40b0) (0xc00063dc20) Stream removed, broadcasting: 5\n"
Feb 20 00:21:03.740: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 20 00:21:03.740: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 20 00:21:03.745: INFO: Found 1 stateful pods, waiting for 3
Feb 20 00:21:13.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 00:21:13.754: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 00:21:13.754: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 20 00:21:23.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 00:21:23.754: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 00:21:23.754: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 20 00:21:23.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-787 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 00:21:24.265: INFO: stderr: "I0220 00:21:24.037027    1639 log.go:172] (0xc00043b760) (0xc000ac0000) Create stream\nI0220 00:21:24.037552    1639 log.go:172] (0xc00043b760) (0xc000ac0000) Stream added, broadcasting: 1\nI0220 00:21:24.045431    1639 log.go:172] (0xc00043b760) Reply frame received for 1\nI0220 00:21:24.045508    1639 log.go:172] (0xc00043b760) (0xc000b18000) Create stream\nI0220 00:21:24.045530    1639 log.go:172] (0xc00043b760) (0xc000b18000) Stream added, broadcasting: 3\nI0220 00:21:24.047839    1639 log.go:172] (0xc00043b760) Reply frame received for 3\nI0220 00:21:24.047875    1639 log.go:172] (0xc00043b760) (0xc000938000) Create stream\nI0220 00:21:24.047887    1639 log.go:172] (0xc00043b760) (0xc000938000) Stream added, broadcasting: 5\nI0220 00:21:24.050335    1639 log.go:172] (0xc00043b760) Reply frame received for 5\nI0220 00:21:24.158657    1639 log.go:172] (0xc00043b760) Data frame received for 3\nI0220 00:21:24.159008    1639 log.go:172] (0xc000b18000) (3) Data frame handling\nI0220 00:21:24.159134    1639 log.go:172] (0xc000b18000) (3) Data frame sent\nI0220 00:21:24.160001    1639 log.go:172] (0xc00043b760) Data frame received for 5\nI0220 00:21:24.160068    1639 log.go:172] (0xc000938000) (5) Data frame handling\nI0220 00:21:24.160123    1639 log.go:172] (0xc000938000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 00:21:24.248421    1639 log.go:172] (0xc00043b760) Data frame received for 1\nI0220 00:21:24.248937    1639 log.go:172] (0xc000ac0000) (1) Data frame handling\nI0220 00:21:24.249024    1639 log.go:172] (0xc000ac0000) (1) Data frame sent\nI0220 00:21:24.249161    1639 log.go:172] (0xc00043b760) (0xc000ac0000) Stream removed, broadcasting: 1\nI0220 00:21:24.249817    1639 log.go:172] (0xc00043b760) (0xc000b18000) Stream removed, broadcasting: 3\nI0220 00:21:24.249892    1639 log.go:172] (0xc00043b760) (0xc000938000) Stream removed, broadcasting: 5\nI0220 00:21:24.249955    1639 log.go:172] (0xc00043b760) Go away received\nI0220 00:21:24.249986    1639 log.go:172] (0xc00043b760) (0xc000ac0000) Stream removed, broadcasting: 1\nI0220 00:21:24.250000    1639 log.go:172] (0xc00043b760) (0xc000b18000) Stream removed, broadcasting: 3\nI0220 00:21:24.250004    1639 log.go:172] (0xc00043b760) (0xc000938000) Stream removed, broadcasting: 5\n"
Feb 20 00:21:24.266: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 00:21:24.266: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 20 00:21:24.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-787 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 00:21:24.682: INFO: stderr: "I0220 00:21:24.406122    1656 log.go:172] (0xc00098cf20) (0xc000972500) Create stream\nI0220 00:21:24.406347    1656 log.go:172] (0xc00098cf20) (0xc000972500) Stream added, broadcasting: 1\nI0220 00:21:24.412109    1656 log.go:172] (0xc00098cf20) Reply frame received for 1\nI0220 00:21:24.412147    1656 log.go:172] (0xc00098cf20) (0xc000621d60) Create stream\nI0220 00:21:24.412174    1656 log.go:172] (0xc00098cf20) (0xc000621d60) Stream added, broadcasting: 3\nI0220 00:21:24.413101    1656 log.go:172] (0xc00098cf20) Reply frame received for 3\nI0220 00:21:24.413117    1656 log.go:172] (0xc00098cf20) (0xc000621e00) Create stream\nI0220 00:21:24.413121    1656 log.go:172] (0xc00098cf20) (0xc000621e00) Stream added, broadcasting: 5\nI0220 00:21:24.413816    1656 log.go:172] (0xc00098cf20) Reply frame received for 5\nI0220 00:21:24.498260    1656 log.go:172] (0xc00098cf20) Data frame received for 5\nI0220 00:21:24.498352    1656 log.go:172] (0xc000621e00) (5) Data frame handling\nI0220 00:21:24.498395    1656 log.go:172] (0xc000621e00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 00:21:24.543781    1656 log.go:172] (0xc00098cf20) Data frame received for 3\nI0220 00:21:24.544088    1656 log.go:172] (0xc000621d60) (3) Data frame handling\nI0220 00:21:24.544190    1656 log.go:172] (0xc000621d60) (3) Data frame sent\nI0220 00:21:24.667266    1656 log.go:172] (0xc00098cf20) (0xc000621e00) Stream removed, broadcasting: 5\nI0220 00:21:24.667377    1656 log.go:172] (0xc00098cf20) Data frame received for 1\nI0220 00:21:24.667414    1656 log.go:172] (0xc00098cf20) (0xc000621d60) Stream removed, broadcasting: 3\nI0220 00:21:24.667476    1656 log.go:172] (0xc000972500) (1) Data frame handling\nI0220 00:21:24.667520    1656 log.go:172] (0xc000972500) (1) Data frame sent\nI0220 00:21:24.667537    1656 log.go:172] (0xc00098cf20) (0xc000972500) Stream removed, broadcasting: 1\nI0220 00:21:24.667556    1656 log.go:172] (0xc00098cf20) Go away received\nI0220 00:21:24.668654    1656 log.go:172] (0xc00098cf20) (0xc000972500) Stream removed, broadcasting: 1\nI0220 00:21:24.668675    1656 log.go:172] (0xc00098cf20) (0xc000621d60) Stream removed, broadcasting: 3\nI0220 00:21:24.668685    1656 log.go:172] (0xc00098cf20) (0xc000621e00) Stream removed, broadcasting: 5\n"
Feb 20 00:21:24.683: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 00:21:24.683: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 20 00:21:24.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-787 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 00:21:25.093: INFO: stderr: "I0220 00:21:24.832238    1675 log.go:172] (0xc0000f5550) (0xc0007abb80) Create stream\nI0220 00:21:24.832307    1675 log.go:172] (0xc0000f5550) (0xc0007abb80) Stream added, broadcasting: 1\nI0220 00:21:24.838970    1675 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0220 00:21:24.839087    1675 log.go:172] (0xc0000f5550) (0xc0007abd60) Create stream\nI0220 00:21:24.839107    1675 log.go:172] (0xc0000f5550) (0xc0007abd60) Stream added, broadcasting: 3\nI0220 00:21:24.841127    1675 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0220 00:21:24.841181    1675 log.go:172] (0xc0000f5550) (0xc00098c000) Create stream\nI0220 00:21:24.841189    1675 log.go:172] (0xc0000f5550) (0xc00098c000) Stream added, broadcasting: 5\nI0220 00:21:24.842744    1675 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0220 00:21:24.978001    1675 log.go:172] (0xc0000f5550) Data frame received for 5\nI0220 00:21:24.978094    1675 log.go:172] (0xc00098c000) (5) Data frame handling\nI0220 00:21:24.978168    1675 log.go:172] (0xc00098c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 00:21:24.997688    1675 log.go:172] (0xc0000f5550) Data frame received for 3\nI0220 00:21:24.997870    1675 log.go:172] (0xc0007abd60) (3) Data frame handling\nI0220 00:21:24.997946    1675 log.go:172] (0xc0007abd60) (3) Data frame sent\nI0220 00:21:25.079643    1675 log.go:172] (0xc0000f5550) (0xc0007abd60) Stream removed, broadcasting: 3\nI0220 00:21:25.079729    1675 log.go:172] (0xc0000f5550) Data frame received for 1\nI0220 00:21:25.079744    1675 log.go:172] (0xc0007abb80) (1) Data frame handling\nI0220 00:21:25.079767    1675 log.go:172] (0xc0007abb80) (1) Data frame sent\nI0220 00:21:25.079777    1675 log.go:172] (0xc0000f5550) (0xc0007abb80) Stream removed, broadcasting: 1\nI0220 00:21:25.080347    1675 log.go:172] (0xc0000f5550) (0xc00098c000) Stream removed, broadcasting: 5\nI0220 00:21:25.080383    1675 log.go:172] (0xc0000f5550) (0xc0007abb80) Stream removed, broadcasting: 1\nI0220 00:21:25.080394    1675 log.go:172] (0xc0000f5550) (0xc0007abd60) Stream removed, broadcasting: 3\nI0220 00:21:25.080402    1675 log.go:172] (0xc0000f5550) (0xc00098c000) Stream removed, broadcasting: 5\n"
Feb 20 00:21:25.094: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 00:21:25.094: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 20 00:21:25.094: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 00:21:25.141: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 20 00:21:35.150: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 00:21:35.150: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 00:21:35.150: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 00:21:35.601: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999412s
Feb 20 00:21:36.618: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.5546927s
Feb 20 00:21:37.627: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.537611799s
Feb 20 00:21:38.635: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.528251214s
Feb 20 00:21:39.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.52018083s
Feb 20 00:21:40.775: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.512294696s
Feb 20 00:21:41.808: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.380317267s
Feb 20 00:21:42.852: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.347171515s
Feb 20 00:21:43.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.30367843s
Feb 20 00:21:44.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 295.236902ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-787
Feb 20 00:21:45.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 00:21:46.248: INFO: stderr: "I0220 00:21:46.072830    1696 log.go:172] (0xc000939080) (0xc0008f2500) Create stream\nI0220 00:21:46.073750    1696 log.go:172] (0xc000939080) (0xc0008f2500) Stream added, broadcasting: 1\nI0220 00:21:46.082153    1696 log.go:172] (0xc000939080) Reply frame received for 1\nI0220 00:21:46.082185    1696 log.go:172] (0xc000939080) (0xc000620820) Create stream\nI0220 00:21:46.082193    1696 log.go:172] (0xc000939080) (0xc000620820) Stream added, broadcasting: 3\nI0220 00:21:46.083676    1696 log.go:172] (0xc000939080) Reply frame received for 3\nI0220 00:21:46.083740    1696 log.go:172] (0xc000939080) (0xc0008ee000) Create stream\nI0220 00:21:46.083748    1696 log.go:172] (0xc000939080) (0xc0008ee000) Stream added, broadcasting: 5\nI0220 00:21:46.085180    1696 log.go:172] (0xc000939080) Reply frame received for 5\nI0220 00:21:46.176819    1696 log.go:172] (0xc000939080) Data frame received for 5\nI0220 00:21:46.176869    1696 log.go:172] (0xc0008ee000) (5) Data frame handling\nI0220 00:21:46.176885    1696 log.go:172] (0xc0008ee000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0220 00:21:46.176914    1696 log.go:172] (0xc000939080) Data frame received for 3\nI0220 00:21:46.176921    1696 log.go:172] (0xc000620820) (3) Data frame handling\nI0220 00:21:46.176929    1696 log.go:172] (0xc000620820) (3) Data frame sent\nI0220 00:21:46.239968    1696 log.go:172] (0xc000939080) Data frame received for 1\nI0220 00:21:46.240103    1696 log.go:172] (0xc000939080) (0xc0008ee000) Stream removed, broadcasting: 5\nI0220 00:21:46.240178    1696 log.go:172] (0xc0008f2500) (1) Data frame handling\nI0220 00:21:46.240220    1696 log.go:172] (0xc0008f2500) (1) Data frame sent\nI0220 00:21:46.240381    1696 log.go:172] (0xc000939080) (0xc000620820) Stream removed, broadcasting: 3\nI0220 00:21:46.240431    1696 log.go:172] (0xc000939080) (0xc0008f2500) Stream removed, broadcasting: 1\nI0220 00:21:46.240449    1696 log.go:172] (0xc000939080) Go away received\nI0220 00:21:46.241448    1696 log.go:172] (0xc000939080) (0xc0008f2500) Stream removed, broadcasting: 1\nI0220 00:21:46.241464    1696 log.go:172] (0xc000939080) (0xc000620820) Stream removed, broadcasting: 3\nI0220 00:21:46.241471    1696 log.go:172] (0xc000939080) (0xc0008ee000) Stream removed, broadcasting: 5\n"
Feb 20 00:21:46.248: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 20 00:21:46.248: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 20 00:21:46.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-787 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 00:21:46.625: INFO: stderr: "I0220 00:21:46.413538    1716 log.go:172] (0xc0009e66e0) (0xc00070df40) Create stream\nI0220 00:21:46.413705    1716 log.go:172] (0xc0009e66e0) (0xc00070df40) Stream added, broadcasting: 1\nI0220 00:21:46.416676    1716 log.go:172] (0xc0009e66e0) Reply frame received for 1\nI0220 00:21:46.416704    1716 log.go:172] (0xc0009e66e0) (0xc00066a820) Create stream\nI0220 00:21:46.416710    1716 log.go:172] (0xc0009e66e0) (0xc00066a820) Stream added, broadcasting: 3\nI0220 00:21:46.417724    1716 log.go:172] (0xc0009e66e0) Reply frame received for 3\nI0220 00:21:46.417744    1716 log.go:172] (0xc0009e66e0) (0xc0002ab4a0) Create stream\nI0220 00:21:46.417752    1716 log.go:172] (0xc0009e66e0) (0xc0002ab4a0) Stream added, broadcasting: 5\nI0220 00:21:46.419108    1716 log.go:172] (0xc0009e66e0) Reply frame received for 5\nI0220 00:21:46.502081    1716 log.go:172] (0xc0009e66e0) Data frame received for 5\nI0220 00:21:46.502155    1716 log.go:172] (0xc0002ab4a0) (5) Data frame handling\nI0220 00:21:46.502173    1716 log.go:172] (0xc0002ab4a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0220 00:21:46.502264    1716 log.go:172] (0xc0009e66e0) Data frame received for 3\nI0220 00:21:46.502285    1716 log.go:172] (0xc00066a820) (3) Data frame handling\nI0220 00:21:46.502313    1716 log.go:172] (0xc00066a820) (3) Data frame sent\nI0220 00:21:46.613883    1716 log.go:172] (0xc0009e66e0) (0xc00066a820) Stream removed, broadcasting: 3\nI0220 00:21:46.614057    1716 log.go:172] (0xc0009e66e0) Data frame received for 1\nI0220 00:21:46.614077    1716 log.go:172] (0xc00070df40) (1) Data frame handling\nI0220 00:21:46.614167    1716 log.go:172] (0xc00070df40) (1) Data frame sent\nI0220 00:21:46.614195    1716 log.go:172] (0xc0009e66e0) (0xc00070df40) Stream removed, broadcasting: 1\nI0220 00:21:46.614590    1716 log.go:172] (0xc0009e66e0) (0xc0002ab4a0) Stream removed, broadcasting: 5\nI0220 00:21:46.614698    1716 log.go:172] (0xc0009e66e0) Go away received\nI0220 00:21:46.615749    1716 log.go:172] (0xc0009e66e0) (0xc00070df40) Stream removed, broadcasting: 1\nI0220 00:21:46.615797    1716 log.go:172] (0xc0009e66e0) (0xc00066a820) Stream removed, broadcasting: 3\nI0220 00:21:46.615812    1716 log.go:172] (0xc0009e66e0) (0xc0002ab4a0) Stream removed, broadcasting: 5\n"
Feb 20 00:21:46.625: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 20 00:21:46.625: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 20 00:21:46.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-787 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 00:21:46.938: INFO: stderr: "I0220 00:21:46.759290    1737 log.go:172] (0xc000b10d10) (0xc00096e280) Create stream\nI0220 00:21:46.759400    1737 log.go:172] (0xc000b10d10) (0xc00096e280) Stream added, broadcasting: 1\nI0220 00:21:46.768987    1737 log.go:172] (0xc000b10d10) Reply frame received for 1\nI0220 00:21:46.769019    1737 log.go:172] (0xc000b10d10) (0xc000699d60) Create stream\nI0220 00:21:46.769029    1737 log.go:172] (0xc000b10d10) (0xc000699d60) Stream added, broadcasting: 3\nI0220 00:21:46.770163    1737 log.go:172] (0xc000b10d10) Reply frame received for 3\nI0220 00:21:46.770199    1737 log.go:172] (0xc000b10d10) (0xc0005f6960) Create stream\nI0220 00:21:46.770218    1737 log.go:172] (0xc000b10d10) (0xc0005f6960) Stream added, broadcasting: 5\nI0220 00:21:46.771604    1737 log.go:172] (0xc000b10d10) Reply frame received for 5\nI0220 00:21:46.839078    1737 log.go:172] (0xc000b10d10) Data frame received for 5\nI0220 00:21:46.839094    1737 log.go:172] (0xc0005f6960) (5) Data frame handling\nI0220 00:21:46.839112    1737 log.go:172] (0xc0005f6960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0220 00:21:46.839315    1737 log.go:172] (0xc000b10d10) Data frame received for 3\nI0220 00:21:46.839338    1737 log.go:172] (0xc000699d60) (3) Data frame handling\nI0220 00:21:46.839362    1737 log.go:172] (0xc000699d60) (3) Data frame sent\nI0220 00:21:46.926891    1737 log.go:172] (0xc000b10d10) (0xc000699d60) Stream removed, broadcasting: 3\nI0220 00:21:46.927512    1737 log.go:172] (0xc000b10d10) Data frame received for 1\nI0220 00:21:46.927605    1737 log.go:172] (0xc000b10d10) (0xc0005f6960) Stream removed, broadcasting: 5\nI0220 00:21:46.927883    1737 log.go:172] (0xc00096e280) (1) Data frame handling\nI0220 00:21:46.928048    1737 log.go:172] (0xc00096e280) (1) Data frame sent\nI0220 00:21:46.928146    1737 log.go:172] (0xc000b10d10) (0xc00096e280) Stream removed, broadcasting: 1\nI0220 00:21:46.928222    1737 log.go:172] (0xc000b10d10) Go away received\nI0220 00:21:46.929219    1737 log.go:172] (0xc000b10d10) (0xc00096e280) Stream removed, broadcasting: 1\nI0220 00:21:46.929242    1737 log.go:172] (0xc000b10d10) (0xc000699d60) Stream removed, broadcasting: 3\nI0220 00:21:46.929262    1737 log.go:172] (0xc000b10d10) (0xc0005f6960) Stream removed, broadcasting: 5\n"
Feb 20 00:21:46.938: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 20 00:21:46.938: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 20 00:21:46.938: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 20 00:22:06.971: INFO: Deleting all statefulset in ns statefulset-787
Feb 20 00:22:06.978: INFO: Scaling statefulset ss to 0
Feb 20 00:22:06.992: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 00:22:06.995: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:22:07.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-787" for this suite.

• [SLOW TEST:94.976 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":104,"skipped":1582,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:22:07.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 00:22:07.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82" in namespace "projected-3322" to be "success or failure"
Feb 20 00:22:07.130: INFO: Pod "downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82": Phase="Pending", Reason="", readiness=false. Elapsed: 3.039938ms
Feb 20 00:22:09.139: INFO: Pod "downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012182586s
Feb 20 00:22:11.146: INFO: Pod "downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018347683s
Feb 20 00:22:13.150: INFO: Pod "downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022611034s
Feb 20 00:22:15.155: INFO: Pod "downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.027756423s
STEP: Saw pod success
Feb 20 00:22:15.155: INFO: Pod "downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82" satisfied condition "success or failure"
Feb 20 00:22:15.158: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82 container client-container: 
STEP: delete the pod
Feb 20 00:22:15.268: INFO: Waiting for pod downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82 to disappear
Feb 20 00:22:15.293: INFO: Pod downwardapi-volume-bcdc1c3a-eee2-4cd8-a371-9c09720dac82 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:22:15.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3322" for this suite.

• [SLOW TEST:8.291 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":105,"skipped":1590,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:22:15.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:22:16.195: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 00:22:18.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:22:20.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:22:22.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717754936, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:22:25.273: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:22:25.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8125" for this suite.
STEP: Destroying namespace "webhook-8125-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.295 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":106,"skipped":1606,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:22:25.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 20 00:22:25.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3929'
Feb 20 00:22:26.094: INFO: stderr: ""
Feb 20 00:22:26.094: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 20 00:22:26.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3929'
Feb 20 00:22:26.259: INFO: stderr: ""
Feb 20 00:22:26.259: INFO: stdout: "update-demo-nautilus-xk7h4 update-demo-nautilus-z24fk "
Feb 20 00:22:26.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xk7h4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3929'
Feb 20 00:22:26.956: INFO: stderr: ""
Feb 20 00:22:26.956: INFO: stdout: ""
Feb 20 00:22:26.956: INFO: update-demo-nautilus-xk7h4 is created but not running
Feb 20 00:22:31.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3929'
Feb 20 00:22:33.552: INFO: stderr: ""
Feb 20 00:22:33.553: INFO: stdout: "update-demo-nautilus-xk7h4 update-demo-nautilus-z24fk "
Feb 20 00:22:33.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xk7h4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3929'
Feb 20 00:22:34.141: INFO: stderr: ""
Feb 20 00:22:34.141: INFO: stdout: ""
Feb 20 00:22:34.141: INFO: update-demo-nautilus-xk7h4 is created but not running
Feb 20 00:22:39.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3929'
Feb 20 00:22:39.275: INFO: stderr: ""
Feb 20 00:22:39.275: INFO: stdout: "update-demo-nautilus-xk7h4 update-demo-nautilus-z24fk "
Feb 20 00:22:39.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xk7h4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3929'
Feb 20 00:22:39.425: INFO: stderr: ""
Feb 20 00:22:39.425: INFO: stdout: "true"
Feb 20 00:22:39.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xk7h4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3929'
Feb 20 00:22:39.555: INFO: stderr: ""
Feb 20 00:22:39.555: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 20 00:22:39.555: INFO: validating pod update-demo-nautilus-xk7h4
Feb 20 00:22:39.565: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 20 00:22:39.565: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 20 00:22:39.565: INFO: update-demo-nautilus-xk7h4 is verified up and running
Feb 20 00:22:39.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z24fk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3929'
Feb 20 00:22:39.642: INFO: stderr: ""
Feb 20 00:22:39.642: INFO: stdout: "true"
Feb 20 00:22:39.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z24fk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3929'
Feb 20 00:22:39.730: INFO: stderr: ""
Feb 20 00:22:39.730: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 20 00:22:39.730: INFO: validating pod update-demo-nautilus-z24fk
Feb 20 00:22:39.735: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 20 00:22:39.735: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 20 00:22:39.735: INFO: update-demo-nautilus-z24fk is verified up and running
STEP: using delete to clean up resources
Feb 20 00:22:39.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3929'
Feb 20 00:22:39.830: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 20 00:22:39.830: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 20 00:22:39.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3929'
Feb 20 00:22:40.055: INFO: stderr: "No resources found in kubectl-3929 namespace.\n"
Feb 20 00:22:40.055: INFO: stdout: ""
Feb 20 00:22:40.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3929 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 20 00:22:40.208: INFO: stderr: ""
Feb 20 00:22:40.208: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:22:40.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3929" for this suite.

• [SLOW TEST:14.595 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":280,"completed":107,"skipped":1619,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:22:40.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-1db5a481-70c1-4c76-b756-d100b34cba17
STEP: Creating a pod to test consume secrets
Feb 20 00:22:41.641: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a" in namespace "projected-3268" to be "success or failure"
Feb 20 00:22:41.733: INFO: Pod "pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 91.879726ms
Feb 20 00:22:43.747: INFO: Pod "pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10599247s
Feb 20 00:22:45.756: INFO: Pod "pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114863331s
Feb 20 00:22:47.765: INFO: Pod "pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123990811s
Feb 20 00:22:49.787: INFO: Pod "pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145893154s
Feb 20 00:22:51.800: INFO: Pod "pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15894208s
STEP: Saw pod success
Feb 20 00:22:51.800: INFO: Pod "pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a" satisfied condition "success or failure"
Feb 20 00:22:51.805: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a container projected-secret-volume-test: 
STEP: delete the pod
Feb 20 00:22:52.036: INFO: Waiting for pod pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a to disappear
Feb 20 00:22:52.058: INFO: Pod pod-projected-secrets-666d4a0e-f19c-45f4-a34a-a47237507d0a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:22:52.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3268" for this suite.

• [SLOW TEST:11.857 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":108,"skipped":1624,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:22:52.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0220 00:23:32.894871       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 20 00:23:32.895: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:23:32.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6185" for this suite.

• [SLOW TEST:40.843 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":109,"skipped":1632,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:23:32.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:23:33.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 20 00:23:33.275: INFO: stderr: ""
Feb 20 00:23:33.275: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:23:33.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9476" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":280,"completed":110,"skipped":1639,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:23:33.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 20 00:23:33.369: INFO: Waiting up to 5m0s for pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58" in namespace "emptydir-9084" to be "success or failure"
Feb 20 00:23:33.413: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Pending", Reason="", readiness=false. Elapsed: 44.342992ms
Feb 20 00:23:35.423: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054153035s
Feb 20 00:23:38.023: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.654374972s
Feb 20 00:23:40.059: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.690126551s
Feb 20 00:23:43.333: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Pending", Reason="", readiness=false. Elapsed: 9.964338115s
Feb 20 00:23:45.340: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Pending", Reason="", readiness=false. Elapsed: 11.97059111s
Feb 20 00:23:47.943: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Pending", Reason="", readiness=false. Elapsed: 14.574138385s
Feb 20 00:23:50.085: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Pending", Reason="", readiness=false. Elapsed: 16.715951296s
Feb 20 00:23:52.168: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Pending", Reason="", readiness=false. Elapsed: 18.799463338s
Feb 20 00:23:54.209: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.840553283s
STEP: Saw pod success
Feb 20 00:23:54.210: INFO: Pod "pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58" satisfied condition "success or failure"
Feb 20 00:23:54.215: INFO: Trying to get logs from node jerma-node pod pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58 container test-container: 
STEP: delete the pod
Feb 20 00:23:54.291: INFO: Waiting for pod pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58 to disappear
Feb 20 00:23:54.301: INFO: Pod pod-8006ec67-bf83-4383-81bb-fe6e7cb61d58 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:23:54.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9084" for this suite.

• [SLOW TEST:21.034 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":111,"skipped":1640,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:23:54.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 20 00:24:02.605: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:24:02.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3879" for this suite.

• [SLOW TEST:8.790 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":112,"skipped":1662,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:24:03.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:24:50.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3090" for this suite.

• [SLOW TEST:47.476 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":113,"skipped":1692,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:24:50.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:24:51.831: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 00:24:53.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755092, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:24:55.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755092, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:24:57.861: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755092, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:24:59.860: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755092, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755091, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:25:02.989: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:25:02.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9956-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:25:04.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1622" for this suite.
STEP: Destroying namespace "webhook-1622-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.561 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":114,"skipped":1709,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:25:04.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:25:04.389: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 20 00:25:09.435: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 20 00:25:13.447: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 20 00:25:13.506: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-1862 /apis/apps/v1/namespaces/deployment-1862/deployments/test-cleanup-deployment 8834cd3e-378a-41b9-ad9d-5d4d71d6d3aa 9502281 1 2020-02-20 00:25:13 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032ef6d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Feb 20 00:25:13.545: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-1862 /apis/apps/v1/namespaces/deployment-1862/replicasets/test-cleanup-deployment-55ffc6b7b6 52fed576-7749-4f89-8a03-09b57f883471 9502283 1 2020-02-20 00:25:13 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 8834cd3e-378a-41b9-ad9d-5d4d71d6d3aa 0xc002eb62d7 0xc002eb62d8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002eb6398  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 20 00:25:13.545: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 20 00:25:13.546: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-1862 /apis/apps/v1/namespaces/deployment-1862/replicasets/test-cleanup-controller d8aab539-a9b3-4f7d-abba-4a08f79d79dd 9502282 1 2020-02-20 00:25:04 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 8834cd3e-378a-41b9-ad9d-5d4d71d6d3aa 0xc002eb6127 0xc002eb6128}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002eb61e8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 20 00:25:13.649: INFO: Pod "test-cleanup-controller-9vljl" is available:
&Pod{ObjectMeta:{test-cleanup-controller-9vljl test-cleanup-controller- deployment-1862 /api/v1/namespaces/deployment-1862/pods/test-cleanup-controller-9vljl dbd28fd7-f224-4009-91dd-f7acb14f321e 9502278 0 2020-02-20 00:25:04 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller d8aab539-a9b3-4f7d-abba-4a08f79d79dd 0xc002eb6cb7 0xc002eb6cb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8mrzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8mrzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8mrzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:25:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:25:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:25:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:25:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-20 00:25:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-20 00:25:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1bd6284358debc7a3c42542aba0b39ba88b2184f635ad14987f9a1728934d538,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 20 00:25:13.650: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-gzzm6" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-gzzm6 test-cleanup-deployment-55ffc6b7b6- deployment-1862 /api/v1/namespaces/deployment-1862/pods/test-cleanup-deployment-55ffc6b7b6-gzzm6 1724c273-ab57-453e-b1d4-86eddcf2a0a2 9502288 0 2020-02-20 00:25:13 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 52fed576-7749-4f89-8a03-09b57f883471 0xc002eb6f17 0xc002eb6f18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8mrzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8mrzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8mrzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:25:13.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1862" for this suite.

• [SLOW TEST:9.544 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":115,"skipped":1710,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:25:13.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-fd3aa2d2-85c7-41c7-94a9-80431f564e81 in namespace container-probe-1932
Feb 20 00:25:27.883: INFO: Started pod liveness-fd3aa2d2-85c7-41c7-94a9-80431f564e81 in namespace container-probe-1932
STEP: checking the pod's current state and verifying that restartCount is present
Feb 20 00:25:27.887: INFO: Initial restart count of pod liveness-fd3aa2d2-85c7-41c7-94a9-80431f564e81 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:29:29.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1932" for this suite.

• [SLOW TEST:255.698 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":116,"skipped":1728,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:29:29.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Feb 20 00:29:29.762: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix045035102/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:29:29.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-315" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":280,"completed":117,"skipped":1736,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:29:29.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Creating role binding to let the CR conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:29:31.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-crd-conversion-webhook-deployment-78dcf5dd84\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:29:33.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:29:35.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:29:37.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:29:40.048: INFO: Waiting for the number of endpoints of service e2e-test-crd-conversion-webhook to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:29:40.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Creating a v2 custom resource
STEP: Listing CRs in v1
STEP: Listing CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:29:41.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7683" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:12.044 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":118,"skipped":1775,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:29:42.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:29:42.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5670" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":119,"skipped":1797,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:29:42.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label, pod-adoption-release, is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Feb 20 00:29:51.224: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:29:52.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-338" for this suite.

• [SLOW TEST:10.176 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":120,"skipped":1799,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:29:52.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of the container is kubelet-managed for the pod with hostNetwork=false
Feb 20 00:30:16.555: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:16.555: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:16.626427       9 log.go:172] (0xc002b78210) (0xc000b3f040) Create stream
I0220 00:30:16.626764       9 log.go:172] (0xc002b78210) (0xc000b3f040) Stream added, broadcasting: 1
I0220 00:30:16.631526       9 log.go:172] (0xc002b78210) Reply frame received for 1
I0220 00:30:16.631656       9 log.go:172] (0xc002b78210) (0xc00216b400) Create stream
I0220 00:30:16.631680       9 log.go:172] (0xc002b78210) (0xc00216b400) Stream added, broadcasting: 3
I0220 00:30:16.633904       9 log.go:172] (0xc002b78210) Reply frame received for 3
I0220 00:30:16.633938       9 log.go:172] (0xc002b78210) (0xc0006d7d60) Create stream
I0220 00:30:16.633954       9 log.go:172] (0xc002b78210) (0xc0006d7d60) Stream added, broadcasting: 5
I0220 00:30:16.636089       9 log.go:172] (0xc002b78210) Reply frame received for 5
I0220 00:30:16.718339       9 log.go:172] (0xc002b78210) Data frame received for 3
I0220 00:30:16.718480       9 log.go:172] (0xc00216b400) (3) Data frame handling
I0220 00:30:16.718506       9 log.go:172] (0xc00216b400) (3) Data frame sent
I0220 00:30:16.791025       9 log.go:172] (0xc002b78210) Data frame received for 1
I0220 00:30:16.791163       9 log.go:172] (0xc002b78210) (0xc0006d7d60) Stream removed, broadcasting: 5
I0220 00:30:16.791221       9 log.go:172] (0xc000b3f040) (1) Data frame handling
I0220 00:30:16.791253       9 log.go:172] (0xc000b3f040) (1) Data frame sent
I0220 00:30:16.791353       9 log.go:172] (0xc002b78210) (0xc000b3f040) Stream removed, broadcasting: 1
I0220 00:30:16.791385       9 log.go:172] (0xc002b78210) (0xc00216b400) Stream removed, broadcasting: 3
I0220 00:30:16.791425       9 log.go:172] (0xc002b78210) Go away received
I0220 00:30:16.791480       9 log.go:172] (0xc002b78210) (0xc000b3f040) Stream removed, broadcasting: 1
I0220 00:30:16.791501       9 log.go:172] (0xc002b78210) (0xc00216b400) Stream removed, broadcasting: 3
I0220 00:30:16.791510       9 log.go:172] (0xc002b78210) (0xc0006d7d60) Stream removed, broadcasting: 5
Feb 20 00:30:16.791: INFO: Exec stderr: ""
Feb 20 00:30:16.791: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:16.791: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:16.844829       9 log.go:172] (0xc00287d4a0) (0xc000e6b360) Create stream
I0220 00:30:16.845536       9 log.go:172] (0xc00287d4a0) (0xc000e6b360) Stream added, broadcasting: 1
I0220 00:30:16.856501       9 log.go:172] (0xc00287d4a0) Reply frame received for 1
I0220 00:30:16.856639       9 log.go:172] (0xc00287d4a0) (0xc000b3f2c0) Create stream
I0220 00:30:16.856664       9 log.go:172] (0xc00287d4a0) (0xc000b3f2c0) Stream added, broadcasting: 3
I0220 00:30:16.858958       9 log.go:172] (0xc00287d4a0) Reply frame received for 3
I0220 00:30:16.859041       9 log.go:172] (0xc00287d4a0) (0xc00216b680) Create stream
I0220 00:30:16.859078       9 log.go:172] (0xc00287d4a0) (0xc00216b680) Stream added, broadcasting: 5
I0220 00:30:16.861897       9 log.go:172] (0xc00287d4a0) Reply frame received for 5
I0220 00:30:16.938405       9 log.go:172] (0xc00287d4a0) Data frame received for 3
I0220 00:30:16.938488       9 log.go:172] (0xc000b3f2c0) (3) Data frame handling
I0220 00:30:16.938509       9 log.go:172] (0xc000b3f2c0) (3) Data frame sent
I0220 00:30:17.015992       9 log.go:172] (0xc00287d4a0) Data frame received for 1
I0220 00:30:17.016183       9 log.go:172] (0xc00287d4a0) (0xc00216b680) Stream removed, broadcasting: 5
I0220 00:30:17.016241       9 log.go:172] (0xc000e6b360) (1) Data frame handling
I0220 00:30:17.016333       9 log.go:172] (0xc000e6b360) (1) Data frame sent
I0220 00:30:17.016364       9 log.go:172] (0xc00287d4a0) (0xc000b3f2c0) Stream removed, broadcasting: 3
I0220 00:30:17.016456       9 log.go:172] (0xc00287d4a0) (0xc000e6b360) Stream removed, broadcasting: 1
I0220 00:30:17.016544       9 log.go:172] (0xc00287d4a0) Go away received
I0220 00:30:17.017108       9 log.go:172] (0xc00287d4a0) (0xc000e6b360) Stream removed, broadcasting: 1
I0220 00:30:17.017142       9 log.go:172] (0xc00287d4a0) (0xc000b3f2c0) Stream removed, broadcasting: 3
I0220 00:30:17.017163       9 log.go:172] (0xc00287d4a0) (0xc00216b680) Stream removed, broadcasting: 5
Feb 20 00:30:17.017: INFO: Exec stderr: ""
Feb 20 00:30:17.017: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:17.017: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:17.059911       9 log.go:172] (0xc002ae8790) (0xc0002da780) Create stream
I0220 00:30:17.060332       9 log.go:172] (0xc002ae8790) (0xc0002da780) Stream added, broadcasting: 1
I0220 00:30:17.067411       9 log.go:172] (0xc002ae8790) Reply frame received for 1
I0220 00:30:17.067475       9 log.go:172] (0xc002ae8790) (0xc000b3f360) Create stream
I0220 00:30:17.067493       9 log.go:172] (0xc002ae8790) (0xc000b3f360) Stream added, broadcasting: 3
I0220 00:30:17.069517       9 log.go:172] (0xc002ae8790) Reply frame received for 3
I0220 00:30:17.069734       9 log.go:172] (0xc002ae8790) (0xc000e6b680) Create stream
I0220 00:30:17.069750       9 log.go:172] (0xc002ae8790) (0xc000e6b680) Stream added, broadcasting: 5
I0220 00:30:17.071954       9 log.go:172] (0xc002ae8790) Reply frame received for 5
I0220 00:30:17.147712       9 log.go:172] (0xc002ae8790) Data frame received for 3
I0220 00:30:17.147815       9 log.go:172] (0xc000b3f360) (3) Data frame handling
I0220 00:30:17.147842       9 log.go:172] (0xc000b3f360) (3) Data frame sent
I0220 00:30:17.215238       9 log.go:172] (0xc002ae8790) Data frame received for 1
I0220 00:30:17.215426       9 log.go:172] (0xc0002da780) (1) Data frame handling
I0220 00:30:17.215471       9 log.go:172] (0xc0002da780) (1) Data frame sent
I0220 00:30:17.215521       9 log.go:172] (0xc002ae8790) (0xc0002da780) Stream removed, broadcasting: 1
I0220 00:30:17.216194       9 log.go:172] (0xc002ae8790) (0xc000b3f360) Stream removed, broadcasting: 3
I0220 00:30:17.216372       9 log.go:172] (0xc002ae8790) (0xc000e6b680) Stream removed, broadcasting: 5
I0220 00:30:17.216645       9 log.go:172] (0xc002ae8790) (0xc0002da780) Stream removed, broadcasting: 1
I0220 00:30:17.216696       9 log.go:172] (0xc002ae8790) (0xc000b3f360) Stream removed, broadcasting: 3
I0220 00:30:17.216732       9 log.go:172] (0xc002ae8790) (0xc000e6b680) Stream removed, broadcasting: 5
I0220 00:30:17.216884       9 log.go:172] (0xc002ae8790) Go away received
Feb 20 00:30:17.216: INFO: Exec stderr: ""
Feb 20 00:30:17.217: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:17.217: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:17.268449       9 log.go:172] (0xc00287dad0) (0xc000e6bc20) Create stream
I0220 00:30:17.268906       9 log.go:172] (0xc00287dad0) (0xc000e6bc20) Stream added, broadcasting: 1
I0220 00:30:17.275430       9 log.go:172] (0xc00287dad0) Reply frame received for 1
I0220 00:30:17.275489       9 log.go:172] (0xc00287dad0) (0xc0001a1cc0) Create stream
I0220 00:30:17.275509       9 log.go:172] (0xc00287dad0) (0xc0001a1cc0) Stream added, broadcasting: 3
I0220 00:30:17.277916       9 log.go:172] (0xc00287dad0) Reply frame received for 3
I0220 00:30:17.278095       9 log.go:172] (0xc00287dad0) (0xc000e6bf40) Create stream
I0220 00:30:17.278132       9 log.go:172] (0xc00287dad0) (0xc000e6bf40) Stream added, broadcasting: 5
I0220 00:30:17.281802       9 log.go:172] (0xc00287dad0) Reply frame received for 5
I0220 00:30:17.340589       9 log.go:172] (0xc00287dad0) Data frame received for 3
I0220 00:30:17.340717       9 log.go:172] (0xc0001a1cc0) (3) Data frame handling
I0220 00:30:17.340741       9 log.go:172] (0xc0001a1cc0) (3) Data frame sent
I0220 00:30:17.417127       9 log.go:172] (0xc00287dad0) Data frame received for 1
I0220 00:30:17.417448       9 log.go:172] (0xc00287dad0) (0xc0001a1cc0) Stream removed, broadcasting: 3
I0220 00:30:17.417561       9 log.go:172] (0xc000e6bc20) (1) Data frame handling
I0220 00:30:17.417738       9 log.go:172] (0xc000e6bc20) (1) Data frame sent
I0220 00:30:17.417758       9 log.go:172] (0xc00287dad0) (0xc000e6bf40) Stream removed, broadcasting: 5
I0220 00:30:17.417842       9 log.go:172] (0xc00287dad0) (0xc000e6bc20) Stream removed, broadcasting: 1
I0220 00:30:17.417890       9 log.go:172] (0xc00287dad0) Go away received
I0220 00:30:17.418044       9 log.go:172] (0xc00287dad0) (0xc000e6bc20) Stream removed, broadcasting: 1
I0220 00:30:17.418058       9 log.go:172] (0xc00287dad0) (0xc0001a1cc0) Stream removed, broadcasting: 3
I0220 00:30:17.418078       9 log.go:172] (0xc00287dad0) (0xc000e6bf40) Stream removed, broadcasting: 5
Feb 20 00:30:17.418: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of the container is not kubelet-managed, since the container specifies its own /etc/hosts mount
Feb 20 00:30:17.418: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:17.418: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:17.457988       9 log.go:172] (0xc002b78a50) (0xc000b3f860) Create stream
I0220 00:30:17.458051       9 log.go:172] (0xc002b78a50) (0xc000b3f860) Stream added, broadcasting: 1
I0220 00:30:17.463589       9 log.go:172] (0xc002b78a50) Reply frame received for 1
I0220 00:30:17.463647       9 log.go:172] (0xc002b78a50) (0xc0006d7e00) Create stream
I0220 00:30:17.463667       9 log.go:172] (0xc002b78a50) (0xc0006d7e00) Stream added, broadcasting: 3
I0220 00:30:17.465643       9 log.go:172] (0xc002b78a50) Reply frame received for 3
I0220 00:30:17.465692       9 log.go:172] (0xc002b78a50) (0xc0010721e0) Create stream
I0220 00:30:17.465706       9 log.go:172] (0xc002b78a50) (0xc0010721e0) Stream added, broadcasting: 5
I0220 00:30:17.467366       9 log.go:172] (0xc002b78a50) Reply frame received for 5
I0220 00:30:17.543709       9 log.go:172] (0xc002b78a50) Data frame received for 3
I0220 00:30:17.543938       9 log.go:172] (0xc0006d7e00) (3) Data frame handling
I0220 00:30:17.543972       9 log.go:172] (0xc0006d7e00) (3) Data frame sent
I0220 00:30:17.615343       9 log.go:172] (0xc002b78a50) (0xc0006d7e00) Stream removed, broadcasting: 3
I0220 00:30:17.615591       9 log.go:172] (0xc002b78a50) Data frame received for 1
I0220 00:30:17.615623       9 log.go:172] (0xc000b3f860) (1) Data frame handling
I0220 00:30:17.615748       9 log.go:172] (0xc000b3f860) (1) Data frame sent
I0220 00:30:17.615792       9 log.go:172] (0xc002b78a50) (0xc000b3f860) Stream removed, broadcasting: 1
I0220 00:30:17.615960       9 log.go:172] (0xc002b78a50) (0xc0010721e0) Stream removed, broadcasting: 5
I0220 00:30:17.616321       9 log.go:172] (0xc002b78a50) Go away received
I0220 00:30:17.616635       9 log.go:172] (0xc002b78a50) (0xc000b3f860) Stream removed, broadcasting: 1
I0220 00:30:17.616734       9 log.go:172] (0xc002b78a50) (0xc0006d7e00) Stream removed, broadcasting: 3
I0220 00:30:17.616758       9 log.go:172] (0xc002b78a50) (0xc0010721e0) Stream removed, broadcasting: 5
Feb 20 00:30:17.616: INFO: Exec stderr: ""
Feb 20 00:30:17.616: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:17.617: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:17.659487       9 log.go:172] (0xc00198e0b0) (0xc000d46f00) Create stream
I0220 00:30:17.659674       9 log.go:172] (0xc00198e0b0) (0xc000d46f00) Stream added, broadcasting: 1
I0220 00:30:17.662919       9 log.go:172] (0xc00198e0b0) Reply frame received for 1
I0220 00:30:17.662999       9 log.go:172] (0xc00198e0b0) (0xc000b3fd60) Create stream
I0220 00:30:17.663022       9 log.go:172] (0xc00198e0b0) (0xc000b3fd60) Stream added, broadcasting: 3
I0220 00:30:17.664433       9 log.go:172] (0xc00198e0b0) Reply frame received for 3
I0220 00:30:17.664628       9 log.go:172] (0xc00198e0b0) (0xc000d470e0) Create stream
I0220 00:30:17.664667       9 log.go:172] (0xc00198e0b0) (0xc000d470e0) Stream added, broadcasting: 5
I0220 00:30:17.666982       9 log.go:172] (0xc00198e0b0) Reply frame received for 5
I0220 00:30:17.723517       9 log.go:172] (0xc00198e0b0) Data frame received for 3
I0220 00:30:17.723625       9 log.go:172] (0xc000b3fd60) (3) Data frame handling
I0220 00:30:17.723662       9 log.go:172] (0xc000b3fd60) (3) Data frame sent
I0220 00:30:17.786808       9 log.go:172] (0xc00198e0b0) (0xc000b3fd60) Stream removed, broadcasting: 3
I0220 00:30:17.787054       9 log.go:172] (0xc00198e0b0) Data frame received for 1
I0220 00:30:17.787074       9 log.go:172] (0xc000d46f00) (1) Data frame handling
I0220 00:30:17.787098       9 log.go:172] (0xc000d46f00) (1) Data frame sent
I0220 00:30:17.787139       9 log.go:172] (0xc00198e0b0) (0xc000d46f00) Stream removed, broadcasting: 1
I0220 00:30:17.787264       9 log.go:172] (0xc00198e0b0) (0xc000d470e0) Stream removed, broadcasting: 5
I0220 00:30:17.787363       9 log.go:172] (0xc00198e0b0) Go away received
I0220 00:30:17.787447       9 log.go:172] (0xc00198e0b0) (0xc000d46f00) Stream removed, broadcasting: 1
I0220 00:30:17.787468       9 log.go:172] (0xc00198e0b0) (0xc000b3fd60) Stream removed, broadcasting: 3
I0220 00:30:17.787487       9 log.go:172] (0xc00198e0b0) (0xc000d470e0) Stream removed, broadcasting: 5
Feb 20 00:30:17.787: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of the container is not kubelet-managed for the pod with hostNetwork=true
Feb 20 00:30:17.787: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:17.787: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:17.840585       9 log.go:172] (0xc001a76370) (0xc001072640) Create stream
I0220 00:30:17.840720       9 log.go:172] (0xc001a76370) (0xc001072640) Stream added, broadcasting: 1
I0220 00:30:17.844751       9 log.go:172] (0xc001a76370) Reply frame received for 1
I0220 00:30:17.844847       9 log.go:172] (0xc001a76370) (0xc000b3ff40) Create stream
I0220 00:30:17.844865       9 log.go:172] (0xc001a76370) (0xc000b3ff40) Stream added, broadcasting: 3
I0220 00:30:17.846906       9 log.go:172] (0xc001a76370) Reply frame received for 3
I0220 00:30:17.846944       9 log.go:172] (0xc001a76370) (0xc0013661e0) Create stream
I0220 00:30:17.846953       9 log.go:172] (0xc001a76370) (0xc0013661e0) Stream added, broadcasting: 5
I0220 00:30:17.850745       9 log.go:172] (0xc001a76370) Reply frame received for 5
I0220 00:30:17.937837       9 log.go:172] (0xc001a76370) Data frame received for 3
I0220 00:30:17.938034       9 log.go:172] (0xc000b3ff40) (3) Data frame handling
I0220 00:30:17.938122       9 log.go:172] (0xc000b3ff40) (3) Data frame sent
I0220 00:30:18.022720       9 log.go:172] (0xc001a76370) (0xc000b3ff40) Stream removed, broadcasting: 3
I0220 00:30:18.022877       9 log.go:172] (0xc001a76370) Data frame received for 1
I0220 00:30:18.022918       9 log.go:172] (0xc001072640) (1) Data frame handling
I0220 00:30:18.022928       9 log.go:172] (0xc001072640) (1) Data frame sent
I0220 00:30:18.022939       9 log.go:172] (0xc001a76370) (0xc001072640) Stream removed, broadcasting: 1
I0220 00:30:18.022949       9 log.go:172] (0xc001a76370) (0xc0013661e0) Stream removed, broadcasting: 5
I0220 00:30:18.023013       9 log.go:172] (0xc001a76370) Go away received
I0220 00:30:18.023225       9 log.go:172] (0xc001a76370) (0xc001072640) Stream removed, broadcasting: 1
I0220 00:30:18.023235       9 log.go:172] (0xc001a76370) (0xc000b3ff40) Stream removed, broadcasting: 3
I0220 00:30:18.023258       9 log.go:172] (0xc001a76370) (0xc0013661e0) Stream removed, broadcasting: 5
Feb 20 00:30:18.023: INFO: Exec stderr: ""
Feb 20 00:30:18.023: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:18.023: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:18.064046       9 log.go:172] (0xc002b79080) (0xc000692280) Create stream
I0220 00:30:18.064230       9 log.go:172] (0xc002b79080) (0xc000692280) Stream added, broadcasting: 1
I0220 00:30:18.067192       9 log.go:172] (0xc002b79080) Reply frame received for 1
I0220 00:30:18.067227       9 log.go:172] (0xc002b79080) (0xc000d47180) Create stream
I0220 00:30:18.067237       9 log.go:172] (0xc002b79080) (0xc000d47180) Stream added, broadcasting: 3
I0220 00:30:18.068423       9 log.go:172] (0xc002b79080) Reply frame received for 3
I0220 00:30:18.068446       9 log.go:172] (0xc002b79080) (0xc001072780) Create stream
I0220 00:30:18.068452       9 log.go:172] (0xc002b79080) (0xc001072780) Stream added, broadcasting: 5
I0220 00:30:18.069401       9 log.go:172] (0xc002b79080) Reply frame received for 5
I0220 00:30:18.130094       9 log.go:172] (0xc002b79080) Data frame received for 3
I0220 00:30:18.130384       9 log.go:172] (0xc000d47180) (3) Data frame handling
I0220 00:30:18.130445       9 log.go:172] (0xc000d47180) (3) Data frame sent
I0220 00:30:18.229845       9 log.go:172] (0xc002b79080) (0xc000d47180) Stream removed, broadcasting: 3
I0220 00:30:18.229991       9 log.go:172] (0xc002b79080) Data frame received for 1
I0220 00:30:18.230003       9 log.go:172] (0xc000692280) (1) Data frame handling
I0220 00:30:18.230015       9 log.go:172] (0xc000692280) (1) Data frame sent
I0220 00:30:18.230024       9 log.go:172] (0xc002b79080) (0xc000692280) Stream removed, broadcasting: 1
I0220 00:30:18.230121       9 log.go:172] (0xc002b79080) (0xc001072780) Stream removed, broadcasting: 5
I0220 00:30:18.230152       9 log.go:172] (0xc002b79080) (0xc000692280) Stream removed, broadcasting: 1
I0220 00:30:18.230162       9 log.go:172] (0xc002b79080) (0xc000d47180) Stream removed, broadcasting: 3
I0220 00:30:18.230170       9 log.go:172] (0xc002b79080) (0xc001072780) Stream removed, broadcasting: 5
Feb 20 00:30:18.230: INFO: Exec stderr: ""
Feb 20 00:30:18.230: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:18.230: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:18.234425       9 log.go:172] (0xc002b79080) Go away received
I0220 00:30:18.275401       9 log.go:172] (0xc002b79760) (0xc0006926e0) Create stream
I0220 00:30:18.275498       9 log.go:172] (0xc002b79760) (0xc0006926e0) Stream added, broadcasting: 1
I0220 00:30:18.277940       9 log.go:172] (0xc002b79760) Reply frame received for 1
I0220 00:30:18.277970       9 log.go:172] (0xc002b79760) (0xc000b86000) Create stream
I0220 00:30:18.277980       9 log.go:172] (0xc002b79760) (0xc000b86000) Stream added, broadcasting: 3
I0220 00:30:18.279045       9 log.go:172] (0xc002b79760) Reply frame received for 3
I0220 00:30:18.279070       9 log.go:172] (0xc002b79760) (0xc000d47540) Create stream
I0220 00:30:18.279079       9 log.go:172] (0xc002b79760) (0xc000d47540) Stream added, broadcasting: 5
I0220 00:30:18.279995       9 log.go:172] (0xc002b79760) Reply frame received for 5
I0220 00:30:18.352860       9 log.go:172] (0xc002b79760) Data frame received for 3
I0220 00:30:18.352981       9 log.go:172] (0xc000b86000) (3) Data frame handling
I0220 00:30:18.353018       9 log.go:172] (0xc000b86000) (3) Data frame sent
I0220 00:30:18.436976       9 log.go:172] (0xc002b79760) Data frame received for 1
I0220 00:30:18.437197       9 log.go:172] (0xc002b79760) (0xc000b86000) Stream removed, broadcasting: 3
I0220 00:30:18.437274       9 log.go:172] (0xc0006926e0) (1) Data frame handling
I0220 00:30:18.437302       9 log.go:172] (0xc0006926e0) (1) Data frame sent
I0220 00:30:18.437329       9 log.go:172] (0xc002b79760) (0xc000d47540) Stream removed, broadcasting: 5
I0220 00:30:18.437407       9 log.go:172] (0xc002b79760) (0xc0006926e0) Stream removed, broadcasting: 1
I0220 00:30:18.437440       9 log.go:172] (0xc002b79760) Go away received
I0220 00:30:18.437947       9 log.go:172] (0xc002b79760) (0xc0006926e0) Stream removed, broadcasting: 1
I0220 00:30:18.437970       9 log.go:172] (0xc002b79760) (0xc000b86000) Stream removed, broadcasting: 3
I0220 00:30:18.437993       9 log.go:172] (0xc002b79760) (0xc000d47540) Stream removed, broadcasting: 5
Feb 20 00:30:18.438: INFO: Exec stderr: ""
Feb 20 00:30:18.438: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1079 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:30:18.438: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:30:18.484443       9 log.go:172] (0xc002b79d90) (0xc000692d20) Create stream
I0220 00:30:18.484690       9 log.go:172] (0xc002b79d90) (0xc000692d20) Stream added, broadcasting: 1
I0220 00:30:18.488707       9 log.go:172] (0xc002b79d90) Reply frame received for 1
I0220 00:30:18.488763       9 log.go:172] (0xc002b79d90) (0xc000692e60) Create stream
I0220 00:30:18.488771       9 log.go:172] (0xc002b79d90) (0xc000692e60) Stream added, broadcasting: 3
I0220 00:30:18.490637       9 log.go:172] (0xc002b79d90) Reply frame received for 3
I0220 00:30:18.490708       9 log.go:172] (0xc002b79d90) (0xc000d47680) Create stream
I0220 00:30:18.490725       9 log.go:172] (0xc002b79d90) (0xc000d47680) Stream added, broadcasting: 5
I0220 00:30:18.491882       9 log.go:172] (0xc002b79d90) Reply frame received for 5
I0220 00:30:18.578315       9 log.go:172] (0xc002b79d90) Data frame received for 3
I0220 00:30:18.578575       9 log.go:172] (0xc000692e60) (3) Data frame handling
I0220 00:30:18.578605       9 log.go:172] (0xc000692e60) (3) Data frame sent
I0220 00:30:18.652413       9 log.go:172] (0xc002b79d90) Data frame received for 1
I0220 00:30:18.652508       9 log.go:172] (0xc002b79d90) (0xc000692e60) Stream removed, broadcasting: 3
I0220 00:30:18.652573       9 log.go:172] (0xc000692d20) (1) Data frame handling
I0220 00:30:18.652593       9 log.go:172] (0xc000692d20) (1) Data frame sent
I0220 00:30:18.652613       9 log.go:172] (0xc002b79d90) (0xc000d47680) Stream removed, broadcasting: 5
I0220 00:30:18.652651       9 log.go:172] (0xc002b79d90) (0xc000692d20) Stream removed, broadcasting: 1
I0220 00:30:18.652669       9 log.go:172] (0xc002b79d90) Go away received
I0220 00:30:18.652792       9 log.go:172] (0xc002b79d90) (0xc000692d20) Stream removed, broadcasting: 1
I0220 00:30:18.652807       9 log.go:172] (0xc002b79d90) (0xc000692e60) Stream removed, broadcasting: 3
I0220 00:30:18.652817       9 log.go:172] (0xc002b79d90) (0xc000d47680) Stream removed, broadcasting: 5
Feb 20 00:30:18.652: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:30:18.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1079" for this suite.

• [SLOW TEST:26.392 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":121,"skipped":1818,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:30:18.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb 20 00:30:31.306: INFO: Successfully updated pod "adopt-release-dct6v"
STEP: Checking that the Job readopts the Pod
Feb 20 00:30:31.306: INFO: Waiting up to 15m0s for pod "adopt-release-dct6v" in namespace "job-5910" to be "adopted"
Feb 20 00:30:31.324: INFO: Pod "adopt-release-dct6v": Phase="Running", Reason="", readiness=true. Elapsed: 17.970249ms
Feb 20 00:30:33.332: INFO: Pod "adopt-release-dct6v": Phase="Running", Reason="", readiness=true. Elapsed: 2.026255907s
Feb 20 00:30:33.332: INFO: Pod "adopt-release-dct6v" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb 20 00:30:33.859: INFO: Successfully updated pod "adopt-release-dct6v"
STEP: Checking that the Job releases the Pod
Feb 20 00:30:33.860: INFO: Waiting up to 15m0s for pod "adopt-release-dct6v" in namespace "job-5910" to be "released"
Feb 20 00:30:33.901: INFO: Pod "adopt-release-dct6v": Phase="Running", Reason="", readiness=true. Elapsed: 40.759012ms
Feb 20 00:30:33.901: INFO: Pod "adopt-release-dct6v" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:30:33.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5910" for this suite.

• [SLOW TEST:15.411 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":122,"skipped":1820,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:30:34.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-e91bfbd5-bb86-4fab-864a-63a678fc55e8 in namespace container-probe-4437
Feb 20 00:30:48.204: INFO: Started pod busybox-e91bfbd5-bb86-4fab-864a-63a678fc55e8 in namespace container-probe-4437
STEP: checking the pod's current state and verifying that restartCount is present
Feb 20 00:30:48.206: INFO: Initial restart count of pod busybox-e91bfbd5-bb86-4fab-864a-63a678fc55e8 is 0
Feb 20 00:31:40.810: INFO: Restart count of pod container-probe-4437/busybox-e91bfbd5-bb86-4fab-864a-63a678fc55e8 is now 1 (52.604258086s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:31:40.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4437" for this suite.

• [SLOW TEST:66.849 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":123,"skipped":1826,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:31:40.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Getting the auto-created API token
Feb 20 00:31:41.526: INFO: created pod pod-service-account-defaultsa
Feb 20 00:31:41.526: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 20 00:31:41.544: INFO: created pod pod-service-account-mountsa
Feb 20 00:31:41.545: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 20 00:31:41.627: INFO: created pod pod-service-account-nomountsa
Feb 20 00:31:41.627: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 20 00:31:41.674: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 20 00:31:41.674: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 20 00:31:41.875: INFO: created pod pod-service-account-mountsa-mountspec
Feb 20 00:31:41.875: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 20 00:31:41.913: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 20 00:31:41.914: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 20 00:31:41.946: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 20 00:31:41.946: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 20 00:31:42.115: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 20 00:31:42.115: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 20 00:31:42.195: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 20 00:31:42.195: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:31:42.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-31" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":280,"completed":124,"skipped":1854,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:31:43.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:31:46.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb 20 00:31:51.116: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-20T00:31:50Z generation:1 name:name1 resourceVersion:9503560 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0188a31c-f445-46a8-b9f1-f8e8b174de4c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb 20 00:32:01.221: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-20T00:32:01Z generation:1 name:name2 resourceVersion:9503622 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2f1ea01d-853b-44d5-81f8-d7ee0abbc9dd] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb 20 00:32:11.232: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-20T00:31:50Z generation:2 name:name1 resourceVersion:9503665 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0188a31c-f445-46a8-b9f1-f8e8b174de4c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb 20 00:32:21.241: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-20T00:32:01Z generation:2 name:name2 resourceVersion:9503690 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2f1ea01d-853b-44d5-81f8-d7ee0abbc9dd] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb 20 00:32:31.254: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-20T00:31:50Z generation:2 name:name1 resourceVersion:9503718 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0188a31c-f445-46a8-b9f1-f8e8b174de4c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb 20 00:32:41.268: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-20T00:32:01Z generation:2 name:name2 resourceVersion:9503742 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2f1ea01d-853b-44d5-81f8-d7ee0abbc9dd] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:32:51.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-328" for this suite.

• [SLOW TEST:68.299 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":125,"skipped":1855,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:32:51.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2517
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 20 00:32:52.112: INFO: Found 0 stateful pods, waiting for 3
Feb 20 00:33:02.120: INFO: Found 2 stateful pods, waiting for 3
Feb 20 00:33:12.122: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 00:33:12.122: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 00:33:12.122: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 20 00:33:22.123: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 00:33:22.123: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 00:33:22.123: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 00:33:22.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2517 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 00:33:24.988: INFO: stderr: "I0220 00:33:24.763418    2043 log.go:172] (0xc000105970) (0xc000691cc0) Create stream\nI0220 00:33:24.763510    2043 log.go:172] (0xc000105970) (0xc000691cc0) Stream added, broadcasting: 1\nI0220 00:33:24.768679    2043 log.go:172] (0xc000105970) Reply frame received for 1\nI0220 00:33:24.768767    2043 log.go:172] (0xc000105970) (0xc0008be0a0) Create stream\nI0220 00:33:24.768796    2043 log.go:172] (0xc000105970) (0xc0008be0a0) Stream added, broadcasting: 3\nI0220 00:33:24.769932    2043 log.go:172] (0xc000105970) Reply frame received for 3\nI0220 00:33:24.769974    2043 log.go:172] (0xc000105970) (0xc000760000) Create stream\nI0220 00:33:24.769992    2043 log.go:172] (0xc000105970) (0xc000760000) Stream added, broadcasting: 5\nI0220 00:33:24.771139    2043 log.go:172] (0xc000105970) Reply frame received for 5\nI0220 00:33:24.855181    2043 log.go:172] (0xc000105970) Data frame received for 5\nI0220 00:33:24.855607    2043 log.go:172] (0xc000760000) (5) Data frame handling\nI0220 00:33:24.855685    2043 log.go:172] (0xc000760000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 00:33:24.893858    2043 log.go:172] (0xc000105970) Data frame received for 3\nI0220 00:33:24.893904    2043 log.go:172] (0xc0008be0a0) (3) Data frame handling\nI0220 00:33:24.893948    2043 log.go:172] (0xc0008be0a0) (3) Data frame sent\nI0220 00:33:24.974651    2043 log.go:172] (0xc000105970) (0xc0008be0a0) Stream removed, broadcasting: 3\nI0220 00:33:24.974921    2043 log.go:172] (0xc000105970) Data frame received for 1\nI0220 00:33:24.975114    2043 log.go:172] (0xc000105970) (0xc000760000) Stream removed, broadcasting: 5\nI0220 00:33:24.975206    2043 log.go:172] (0xc000691cc0) (1) Data frame handling\nI0220 00:33:24.975266    2043 log.go:172] (0xc000691cc0) (1) Data frame sent\nI0220 00:33:24.975283    2043 log.go:172] (0xc000105970) (0xc000691cc0) Stream removed, broadcasting: 1\nI0220 00:33:24.975335    2043 log.go:172] (0xc000105970) Go away received\nI0220 00:33:24.976231    2043 log.go:172] (0xc000105970) (0xc000691cc0) Stream removed, broadcasting: 1\nI0220 00:33:24.976248    2043 log.go:172] (0xc000105970) (0xc0008be0a0) Stream removed, broadcasting: 3\nI0220 00:33:24.976254    2043 log.go:172] (0xc000105970) (0xc000760000) Stream removed, broadcasting: 5\n"
Feb 20 00:33:24.989: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 00:33:24.989: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 20 00:33:35.038: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 20 00:33:46.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2517 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 00:33:46.382: INFO: stderr: "I0220 00:33:46.213991    2070 log.go:172] (0xc000c5d130) (0xc000cf05a0) Create stream\nI0220 00:33:46.214150    2070 log.go:172] (0xc000c5d130) (0xc000cf05a0) Stream added, broadcasting: 1\nI0220 00:33:46.220046    2070 log.go:172] (0xc000c5d130) Reply frame received for 1\nI0220 00:33:46.220383    2070 log.go:172] (0xc000c5d130) (0xc000a48280) Create stream\nI0220 00:33:46.220417    2070 log.go:172] (0xc000c5d130) (0xc000a48280) Stream added, broadcasting: 3\nI0220 00:33:46.226525    2070 log.go:172] (0xc000c5d130) Reply frame received for 3\nI0220 00:33:46.226932    2070 log.go:172] (0xc000c5d130) (0xc000a10280) Create stream\nI0220 00:33:46.227080    2070 log.go:172] (0xc000c5d130) (0xc000a10280) Stream added, broadcasting: 5\nI0220 00:33:46.230711    2070 log.go:172] (0xc000c5d130) Reply frame received for 5\nI0220 00:33:46.292227    2070 log.go:172] (0xc000c5d130) Data frame received for 3\nI0220 00:33:46.292273    2070 log.go:172] (0xc000a48280) (3) Data frame handling\nI0220 00:33:46.292292    2070 log.go:172] (0xc000a48280) (3) Data frame sent\nI0220 00:33:46.292385    2070 log.go:172] (0xc000c5d130) Data frame received for 5\nI0220 00:33:46.292407    2070 log.go:172] (0xc000a10280) (5) Data frame handling\nI0220 00:33:46.292423    2070 log.go:172] (0xc000a10280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0220 00:33:46.373134    2070 log.go:172] (0xc000c5d130) Data frame received for 1\nI0220 00:33:46.373239    2070 log.go:172] (0xc000c5d130) (0xc000a48280) Stream removed, broadcasting: 3\nI0220 00:33:46.373373    2070 log.go:172] (0xc000cf05a0) (1) Data frame handling\nI0220 00:33:46.373409    2070 log.go:172] (0xc000cf05a0) (1) Data frame sent\nI0220 00:33:46.373439    2070 log.go:172] (0xc000c5d130) (0xc000a10280) Stream removed, broadcasting: 5\nI0220 00:33:46.373484    2070 log.go:172] (0xc000c5d130) (0xc000cf05a0) Stream removed, broadcasting: 1\nI0220 00:33:46.373501    2070 log.go:172] (0xc000c5d130) Go away received\nI0220 00:33:46.374319    2070 log.go:172] (0xc000c5d130) (0xc000cf05a0) Stream removed, broadcasting: 1\nI0220 00:33:46.374345    2070 log.go:172] (0xc000c5d130) (0xc000a48280) Stream removed, broadcasting: 3\nI0220 00:33:46.374357    2070 log.go:172] (0xc000c5d130) (0xc000a10280) Stream removed, broadcasting: 5\n"
Feb 20 00:33:46.382: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 20 00:33:46.382: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 20 00:33:56.404: INFO: Waiting for StatefulSet statefulset-2517/ss2 to complete update
Feb 20 00:33:56.404: INFO: Waiting for Pod statefulset-2517/ss2-0 to have update revision ss2-84f9d6bf57 (pod is still at revision ss2-65c7964b94)
Feb 20 00:33:56.404: INFO: Waiting for Pod statefulset-2517/ss2-1 to have update revision ss2-84f9d6bf57 (pod is still at revision ss2-65c7964b94)
Feb 20 00:34:06.464: INFO: Waiting for StatefulSet statefulset-2517/ss2 to complete update
Feb 20 00:34:06.465: INFO: Waiting for Pod statefulset-2517/ss2-0 to have update revision ss2-84f9d6bf57 (pod is still at revision ss2-65c7964b94)
Feb 20 00:34:16.448: INFO: Waiting for StatefulSet statefulset-2517/ss2 to complete update
Feb 20 00:34:16.448: INFO: Waiting for Pod statefulset-2517/ss2-0 to have update revision ss2-84f9d6bf57 (pod is still at revision ss2-65c7964b94)
Feb 20 00:34:26.439: INFO: Waiting for StatefulSet statefulset-2517/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 20 00:34:36.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2517 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 00:34:36.861: INFO: stderr: "I0220 00:34:36.627623    2093 log.go:172] (0xc000ab20b0) (0xc0006b5d60) Create stream\nI0220 00:34:36.627812    2093 log.go:172] (0xc000ab20b0) (0xc0006b5d60) Stream added, broadcasting: 1\nI0220 00:34:36.630941    2093 log.go:172] (0xc000ab20b0) Reply frame received for 1\nI0220 00:34:36.630996    2093 log.go:172] (0xc000ab20b0) (0xc0006b5e00) Create stream\nI0220 00:34:36.631004    2093 log.go:172] (0xc000ab20b0) (0xc0006b5e00) Stream added, broadcasting: 3\nI0220 00:34:36.632065    2093 log.go:172] (0xc000ab20b0) Reply frame received for 3\nI0220 00:34:36.632091    2093 log.go:172] (0xc000ab20b0) (0xc000670820) Create stream\nI0220 00:34:36.632100    2093 log.go:172] (0xc000ab20b0) (0xc000670820) Stream added, broadcasting: 5\nI0220 00:34:36.635248    2093 log.go:172] (0xc000ab20b0) Reply frame received for 5\nI0220 00:34:36.698851    2093 log.go:172] (0xc000ab20b0) Data frame received for 5\nI0220 00:34:36.698865    2093 log.go:172] (0xc000670820) (5) Data frame handling\nI0220 00:34:36.698885    2093 log.go:172] (0xc000670820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 00:34:36.755101    2093 log.go:172] (0xc000ab20b0) Data frame received for 3\nI0220 00:34:36.755129    2093 log.go:172] (0xc0006b5e00) (3) Data frame handling\nI0220 00:34:36.755155    2093 log.go:172] (0xc0006b5e00) (3) Data frame sent\nI0220 00:34:36.842694    2093 log.go:172] (0xc000ab20b0) Data frame received for 1\nI0220 00:34:36.842954    2093 log.go:172] (0xc000ab20b0) (0xc000670820) Stream removed, broadcasting: 5\nI0220 00:34:36.843092    2093 log.go:172] (0xc0006b5d60) (1) Data frame handling\nI0220 00:34:36.843197    2093 log.go:172] (0xc000ab20b0) (0xc0006b5e00) Stream removed, broadcasting: 3\nI0220 00:34:36.843320    2093 log.go:172] (0xc0006b5d60) (1) Data frame sent\nI0220 00:34:36.843364    2093 log.go:172] (0xc000ab20b0) (0xc0006b5d60) Stream removed, broadcasting: 1\nI0220 00:34:36.843391    2093 log.go:172] (0xc000ab20b0) Go away received\nI0220 00:34:36.845315    2093 log.go:172] (0xc000ab20b0) (0xc0006b5d60) Stream removed, broadcasting: 1\nI0220 00:34:36.845370    2093 log.go:172] (0xc000ab20b0) (0xc0006b5e00) Stream removed, broadcasting: 3\nI0220 00:34:36.845384    2093 log.go:172] (0xc000ab20b0) (0xc000670820) Stream removed, broadcasting: 5\n"
Feb 20 00:34:36.861: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 00:34:36.861: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 20 00:34:46.947: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 20 00:34:56.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2517 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 00:34:57.392: INFO: stderr: "I0220 00:34:57.228028    2113 log.go:172] (0xc000aa66e0) (0xc00071d5e0) Create stream\nI0220 00:34:57.228135    2113 log.go:172] (0xc000aa66e0) (0xc00071d5e0) Stream added, broadcasting: 1\nI0220 00:34:57.231366    2113 log.go:172] (0xc000aa66e0) Reply frame received for 1\nI0220 00:34:57.231428    2113 log.go:172] (0xc000aa66e0) (0xc0008ea000) Create stream\nI0220 00:34:57.231443    2113 log.go:172] (0xc000aa66e0) (0xc0008ea000) Stream added, broadcasting: 3\nI0220 00:34:57.232341    2113 log.go:172] (0xc000aa66e0) Reply frame received for 3\nI0220 00:34:57.232365    2113 log.go:172] (0xc000aa66e0) (0xc000b2e000) Create stream\nI0220 00:34:57.232376    2113 log.go:172] (0xc000aa66e0) (0xc000b2e000) Stream added, broadcasting: 5\nI0220 00:34:57.233400    2113 log.go:172] (0xc000aa66e0) Reply frame received for 5\nI0220 00:34:57.296487    2113 log.go:172] (0xc000aa66e0) Data frame received for 5\nI0220 00:34:57.296543    2113 log.go:172] (0xc000b2e000) (5) Data frame handling\nI0220 00:34:57.296573    2113 log.go:172] (0xc000b2e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0220 00:34:57.296595    2113 log.go:172] (0xc000aa66e0) Data frame received for 3\nI0220 00:34:57.296610    2113 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0220 00:34:57.296632    2113 log.go:172] (0xc0008ea000) (3) Data frame sent\nI0220 00:34:57.375776    2113 log.go:172] (0xc000aa66e0) Data frame received for 1\nI0220 00:34:57.375911    2113 log.go:172] (0xc000aa66e0) (0xc0008ea000) Stream removed, broadcasting: 3\nI0220 00:34:57.375994    2113 log.go:172] (0xc00071d5e0) (1) Data frame handling\nI0220 00:34:57.376064    2113 log.go:172] (0xc000aa66e0) (0xc000b2e000) Stream removed, broadcasting: 5\nI0220 00:34:57.376136    2113 log.go:172] (0xc00071d5e0) (1) Data frame sent\nI0220 00:34:57.376152    2113 log.go:172] (0xc000aa66e0) (0xc00071d5e0) Stream removed, broadcasting: 1\nI0220 00:34:57.376179    2113 log.go:172] (0xc000aa66e0) Go away received\nI0220 00:34:57.377077    2113 log.go:172] (0xc000aa66e0) (0xc00071d5e0) Stream removed, broadcasting: 1\nI0220 00:34:57.377095    2113 log.go:172] (0xc000aa66e0) (0xc0008ea000) Stream removed, broadcasting: 3\nI0220 00:34:57.377104    2113 log.go:172] (0xc000aa66e0) (0xc000b2e000) Stream removed, broadcasting: 5\n"
Feb 20 00:34:57.392: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 20 00:34:57.392: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 20 00:35:07.421: INFO: Waiting for StatefulSet statefulset-2517/ss2 to complete update
Feb 20 00:35:07.421: INFO: Waiting for Pod statefulset-2517/ss2-0 to have update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
Feb 20 00:35:07.421: INFO: Waiting for Pod statefulset-2517/ss2-1 to have update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
Feb 20 00:35:07.421: INFO: Waiting for Pod statefulset-2517/ss2-2 to have update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
Feb 20 00:35:17.457: INFO: Waiting for StatefulSet statefulset-2517/ss2 to complete update
Feb 20 00:35:17.457: INFO: Waiting for Pod statefulset-2517/ss2-0 to have update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
Feb 20 00:35:17.457: INFO: Waiting for Pod statefulset-2517/ss2-1 to have update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
Feb 20 00:35:27.505: INFO: Waiting for StatefulSet statefulset-2517/ss2 to complete update
Feb 20 00:35:27.505: INFO: Waiting for Pod statefulset-2517/ss2-0 to have update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
Feb 20 00:35:37.439: INFO: Waiting for StatefulSet statefulset-2517/ss2 to complete update
Feb 20 00:35:37.439: INFO: Waiting for Pod statefulset-2517/ss2-0 to have update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
Feb 20 00:35:47.435: INFO: Waiting for StatefulSet statefulset-2517/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 20 00:35:57.433: INFO: Deleting all statefulset in ns statefulset-2517
Feb 20 00:35:57.437: INFO: Scaling statefulset ss2 to 0
Feb 20 00:36:37.466: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 00:36:37.471: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:36:37.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2517" for this suite.

• [SLOW TEST:225.738 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":126,"skipped":1870,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:36:37.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:36:37.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Feb 20 00:36:40.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9955 create -f -'
Feb 20 00:36:43.798: INFO: stderr: ""
Feb 20 00:36:43.798: INFO: stdout: "e2e-test-crd-publish-openapi-9706-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 20 00:36:43.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9955 delete e2e-test-crd-publish-openapi-9706-crds test-cr'
Feb 20 00:36:44.015: INFO: stderr: ""
Feb 20 00:36:44.016: INFO: stdout: "e2e-test-crd-publish-openapi-9706-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb 20 00:36:44.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9955 apply -f -'
Feb 20 00:36:44.558: INFO: stderr: ""
Feb 20 00:36:44.558: INFO: stdout: "e2e-test-crd-publish-openapi-9706-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 20 00:36:44.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9955 delete e2e-test-crd-publish-openapi-9706-crds test-cr'
Feb 20 00:36:44.678: INFO: stderr: ""
Feb 20 00:36:44.679: INFO: stdout: "e2e-test-crd-publish-openapi-9706-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 20 00:36:44.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9706-crds'
Feb 20 00:36:44.963: INFO: stderr: ""
Feb 20 00:36:44.964: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9706-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:36:48.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9955" for this suite.

• [SLOW TEST:11.002 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":127,"skipped":1903,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:36:48.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 00:36:48.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a" in namespace "projected-2370" to be "success or failure"
Feb 20 00:36:48.631: INFO: Pod "downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.53709ms
Feb 20 00:36:50.639: INFO: Pod "downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013337293s
Feb 20 00:36:52.643: INFO: Pod "downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017664212s
Feb 20 00:36:54.648: INFO: Pod "downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022643665s
Feb 20 00:36:56.660: INFO: Pod "downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034055117s
STEP: Saw pod success
Feb 20 00:36:56.660: INFO: Pod "downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a" satisfied condition "success or failure"
Feb 20 00:36:56.667: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a container client-container: 
STEP: delete the pod
Feb 20 00:36:56.750: INFO: Waiting for pod downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a to disappear
Feb 20 00:36:56.754: INFO: Pod downwardapi-volume-77b5323f-df3b-4456-b327-8026d62a6b5a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:36:56.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2370" for this suite.

• [SLOW TEST:8.223 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":128,"skipped":1918,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:36:56.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb 20 00:36:56.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb 20 00:37:09.483: INFO: >>> kubeConfig: /root/.kube/config
Feb 20 00:37:12.461: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:37:23.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9589" for this suite.

• [SLOW TEST:27.189 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":129,"skipped":1920,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:37:23.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:37:24.713: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 00:37:26.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:37:28.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:37:30.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755844, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:37:33.767: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: an update (PUT) of the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: an update (PATCH) of the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:37:44.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8195" for this suite.
STEP: Destroying namespace "webhook-8195-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:20.261 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":130,"skipped":1920,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:37:44.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:37:53.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-722" for this suite.

• [SLOW TEST:9.210 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":131,"skipped":1968,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:37:53.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:37:54.074: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 20 00:37:56.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:37:58.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:38:00.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:38:02.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717755874, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:38:05.116: INFO: Waiting for the number of endpoints of service e2e-test-crd-conversion-webhook to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:38:05.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:38:06.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5926" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:13.157 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":132,"skipped":2017,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:38:06.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:38:06.791: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 20 00:38:06.809: INFO: Number of nodes with available pods: 0
Feb 20 00:38:06.809: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:08.294: INFO: Number of nodes with available pods: 0
Feb 20 00:38:08.294: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:08.827: INFO: Number of nodes with available pods: 0
Feb 20 00:38:08.827: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:09.834: INFO: Number of nodes with available pods: 0
Feb 20 00:38:09.835: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:10.834: INFO: Number of nodes with available pods: 0
Feb 20 00:38:10.834: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:12.936: INFO: Number of nodes with available pods: 0
Feb 20 00:38:12.937: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:15.220: INFO: Number of nodes with available pods: 0
Feb 20 00:38:15.220: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:15.921: INFO: Number of nodes with available pods: 0
Feb 20 00:38:15.921: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:16.827: INFO: Number of nodes with available pods: 0
Feb 20 00:38:16.827: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:17.826: INFO: Number of nodes with available pods: 2
Feb 20 00:38:17.826: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 20 00:38:17.881: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:17.881: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:18.914: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:18.915: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:19.934: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:19.934: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:20.905: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:20.905: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:22.018: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:22.018: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:22.900: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:22.900: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:23.898: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:23.898: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:23.898: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:24.898: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:24.898: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:24.898: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:25.897: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:25.897: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:25.897: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:26.900: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:26.901: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:26.901: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:27.895: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:27.895: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:27.895: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:28.897: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:28.897: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:28.897: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:29.899: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:29.900: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:29.900: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:30.898: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:30.898: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:30.898: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:31.900: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:31.901: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:31.901: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:32.897: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:32.897: INFO: Wrong image for pod: daemon-set-qlltx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:32.897: INFO: Pod daemon-set-qlltx is not available
Feb 20 00:38:33.909: INFO: Pod daemon-set-8pm5j is not available
Feb 20 00:38:33.909: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:34.901: INFO: Pod daemon-set-8pm5j is not available
Feb 20 00:38:34.902: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:35.900: INFO: Pod daemon-set-8pm5j is not available
Feb 20 00:38:35.900: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:37.755: INFO: Pod daemon-set-8pm5j is not available
Feb 20 00:38:37.755: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:38.059: INFO: Pod daemon-set-8pm5j is not available
Feb 20 00:38:38.059: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:38.898: INFO: Pod daemon-set-8pm5j is not available
Feb 20 00:38:38.898: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:39.902: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:40.935: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:41.896: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:42.896: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:43.907: INFO: Wrong image for pod: daemon-set-crhtf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 20 00:38:43.907: INFO: Pod daemon-set-crhtf is not available
Feb 20 00:38:44.896: INFO: Pod daemon-set-thvb8 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 20 00:38:44.910: INFO: Number of nodes with available pods: 1
Feb 20 00:38:44.910: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:45.928: INFO: Number of nodes with available pods: 1
Feb 20 00:38:45.929: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:46.968: INFO: Number of nodes with available pods: 1
Feb 20 00:38:46.968: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:47.927: INFO: Number of nodes with available pods: 1
Feb 20 00:38:47.928: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:48.948: INFO: Number of nodes with available pods: 1
Feb 20 00:38:48.948: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:49.932: INFO: Number of nodes with available pods: 1
Feb 20 00:38:49.932: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:51.018: INFO: Number of nodes with available pods: 1
Feb 20 00:38:51.018: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 20 00:38:51.925: INFO: Number of nodes with available pods: 2
Feb 20 00:38:51.925: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8637, will wait for the garbage collector to delete the pods
Feb 20 00:38:52.016: INFO: Deleting DaemonSet.extensions daemon-set took: 10.719805ms
Feb 20 00:38:52.316: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.519974ms
Feb 20 00:39:03.143: INFO: Number of nodes with available pods: 0
Feb 20 00:39:03.143: INFO: Number of running nodes: 0, number of available pods: 0
Feb 20 00:39:03.146: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8637/daemonsets","resourceVersion":"9505314"},"items":null}

Feb 20 00:39:03.149: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8637/pods","resourceVersion":"9505314"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:39:03.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8637" for this suite.

• [SLOW TEST:56.565 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":133,"skipped":2019,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:39:03.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating server pod server in namespace prestop-2857
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2857
STEP: Deleting pre-stop pod
Feb 20 00:39:26.367: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:39:26.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2857" for this suite.

• [SLOW TEST:23.327 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":280,"completed":134,"skipped":2027,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:39:26.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:39:26.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1720" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":135,"skipped":2031,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:39:26.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:39:38.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1832" for this suite.

• [SLOW TEST:11.231 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":136,"skipped":2032,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:39:38.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:39:38.233: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 20 00:39:38.252: INFO: Number of nodes with available pods: 0
Feb 20 00:39:38.252: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 20 00:39:38.377: INFO: Number of nodes with available pods: 0
Feb 20 00:39:38.377: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:39.393: INFO: Number of nodes with available pods: 0
Feb 20 00:39:39.393: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:40.384: INFO: Number of nodes with available pods: 0
Feb 20 00:39:40.384: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:41.385: INFO: Number of nodes with available pods: 0
Feb 20 00:39:41.385: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:42.384: INFO: Number of nodes with available pods: 0
Feb 20 00:39:42.384: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:43.392: INFO: Number of nodes with available pods: 0
Feb 20 00:39:43.392: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:44.392: INFO: Number of nodes with available pods: 0
Feb 20 00:39:44.392: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:45.384: INFO: Number of nodes with available pods: 1
Feb 20 00:39:45.384: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 20 00:39:45.429: INFO: Number of nodes with available pods: 1
Feb 20 00:39:45.429: INFO: Number of running nodes: 0, number of available pods: 1
Feb 20 00:39:46.438: INFO: Number of nodes with available pods: 0
Feb 20 00:39:46.439: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 20 00:39:46.465: INFO: Number of nodes with available pods: 0
Feb 20 00:39:46.466: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:47.473: INFO: Number of nodes with available pods: 0
Feb 20 00:39:47.474: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:48.473: INFO: Number of nodes with available pods: 0
Feb 20 00:39:48.473: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:49.472: INFO: Number of nodes with available pods: 0
Feb 20 00:39:49.472: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:50.476: INFO: Number of nodes with available pods: 0
Feb 20 00:39:50.476: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:51.472: INFO: Number of nodes with available pods: 0
Feb 20 00:39:51.473: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:52.473: INFO: Number of nodes with available pods: 0
Feb 20 00:39:52.473: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:53.470: INFO: Number of nodes with available pods: 0
Feb 20 00:39:53.470: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:54.476: INFO: Number of nodes with available pods: 0
Feb 20 00:39:54.477: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:55.470: INFO: Number of nodes with available pods: 0
Feb 20 00:39:55.471: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:56.471: INFO: Number of nodes with available pods: 0
Feb 20 00:39:56.472: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:57.474: INFO: Number of nodes with available pods: 0
Feb 20 00:39:57.474: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:58.479: INFO: Number of nodes with available pods: 0
Feb 20 00:39:58.479: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:39:59.473: INFO: Number of nodes with available pods: 0
Feb 20 00:39:59.473: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:00.479: INFO: Number of nodes with available pods: 0
Feb 20 00:40:00.479: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:01.474: INFO: Number of nodes with available pods: 0
Feb 20 00:40:01.474: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:02.662: INFO: Number of nodes with available pods: 0
Feb 20 00:40:02.662: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:03.475: INFO: Number of nodes with available pods: 0
Feb 20 00:40:03.475: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:04.478: INFO: Number of nodes with available pods: 0
Feb 20 00:40:04.478: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:05.473: INFO: Number of nodes with available pods: 0
Feb 20 00:40:05.473: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:06.475: INFO: Number of nodes with available pods: 0
Feb 20 00:40:06.475: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:07.484: INFO: Number of nodes with available pods: 0
Feb 20 00:40:07.485: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:08.489: INFO: Number of nodes with available pods: 0
Feb 20 00:40:08.489: INFO: Node jerma-node is running more than one daemon pod
Feb 20 00:40:09.476: INFO: Number of nodes with available pods: 1
Feb 20 00:40:09.476: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4740; waiting for the garbage collector to delete the pods
Feb 20 00:40:09.550: INFO: Deleting DaemonSet.extensions daemon-set took: 10.018891ms
Feb 20 00:40:09.851: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.058991ms
Feb 20 00:40:22.460: INFO: Number of nodes with available pods: 0
Feb 20 00:40:22.460: INFO: Number of running nodes: 0, number of available pods: 0
Feb 20 00:40:22.466: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4740/daemonsets","resourceVersion":"9505660"},"items":null}

Feb 20 00:40:22.497: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4740/pods","resourceVersion":"9505660"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:40:22.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4740" for this suite.

• [SLOW TEST:44.462 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":137,"skipped":2037,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:40:22.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:40:22.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Feb 20 00:40:25.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7868 create -f -'
Feb 20 00:40:28.926: INFO: stderr: ""
Feb 20 00:40:28.926: INFO: stdout: "e2e-test-crd-publish-openapi-4124-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 20 00:40:28.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7868 delete e2e-test-crd-publish-openapi-4124-crds test-cr'
Feb 20 00:40:29.142: INFO: stderr: ""
Feb 20 00:40:29.142: INFO: stdout: "e2e-test-crd-publish-openapi-4124-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Feb 20 00:40:29.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7868 apply -f -'
Feb 20 00:40:29.665: INFO: stderr: ""
Feb 20 00:40:29.665: INFO: stdout: "e2e-test-crd-publish-openapi-4124-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 20 00:40:29.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7868 delete e2e-test-crd-publish-openapi-4124-crds test-cr'
Feb 20 00:40:29.845: INFO: stderr: ""
Feb 20 00:40:29.845: INFO: stdout: "e2e-test-crd-publish-openapi-4124-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 20 00:40:29.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4124-crds'
Feb 20 00:40:30.157: INFO: stderr: ""
Feb 20 00:40:30.157: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4124-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:40:33.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7868" for this suite.

• [SLOW TEST:11.236 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":138,"skipped":2052,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:40:33.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 20 00:40:33.955: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:40:46.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9906" for this suite.

• [SLOW TEST:12.703 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":139,"skipped":2086,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:40:46.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 00:40:46.670: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c" in namespace "projected-7137" to be "success or failure"
Feb 20 00:40:46.680: INFO: Pod "downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.615487ms
Feb 20 00:40:48.687: INFO: Pod "downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016868498s
Feb 20 00:40:50.703: INFO: Pod "downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033176182s
Feb 20 00:40:52.710: INFO: Pod "downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039562099s
Feb 20 00:40:54.717: INFO: Pod "downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046461967s
STEP: Saw pod success
Feb 20 00:40:54.717: INFO: Pod "downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c" satisfied condition "success or failure"
Feb 20 00:40:54.719: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c container client-container: 
STEP: delete the pod
Feb 20 00:40:55.015: INFO: Waiting for pod downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c to disappear
Feb 20 00:40:55.019: INFO: Pod downwardapi-volume-6638fadb-355b-4754-973a-786d9938fd9c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:40:55.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7137" for this suite.

• [SLOW TEST:8.546 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":140,"skipped":2089,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:40:55.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-12aad4d0-55ab-4249-9eb7-6851943bc00e
STEP: Creating configMap with name cm-test-opt-upd-49348ad3-e6af-4e41-9799-b482e5840133
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-12aad4d0-55ab-4249-9eb7-6851943bc00e
STEP: Updating configmap cm-test-opt-upd-49348ad3-e6af-4e41-9799-b482e5840133
STEP: Creating configMap with name cm-test-opt-create-17356b1e-d927-4f4e-8e40-015e8bcdafa8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:41:07.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8101" for this suite.

• [SLOW TEST:12.716 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":2100,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:41:07.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-2435
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 20 00:41:07.878: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 20 00:41:08.013: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:41:10.199: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:41:12.022: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:41:15.322: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:41:16.353: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:41:18.060: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:41:20.047: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:41:22.020: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:41:24.025: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:41:26.021: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:41:28.032: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 20 00:41:28.038: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 20 00:41:30.043: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 20 00:41:32.045: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 20 00:41:34.048: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 20 00:41:36.050: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 20 00:41:38.046: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 20 00:41:44.088: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.44.0.2&port=8081&tries=1'] Namespace:pod-network-test-2435 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:41:44.088: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:41:44.166869       9 log.go:172] (0xc002b78420) (0xc001031a40) Create stream
I0220 00:41:44.167212       9 log.go:172] (0xc002b78420) (0xc001031a40) Stream added, broadcasting: 1
I0220 00:41:44.171315       9 log.go:172] (0xc002b78420) Reply frame received for 1
I0220 00:41:44.171362       9 log.go:172] (0xc002b78420) (0xc0001a1cc0) Create stream
I0220 00:41:44.171378       9 log.go:172] (0xc002b78420) (0xc0001a1cc0) Stream added, broadcasting: 3
I0220 00:41:44.172606       9 log.go:172] (0xc002b78420) Reply frame received for 3
I0220 00:41:44.172643       9 log.go:172] (0xc002b78420) (0xc0002db9a0) Create stream
I0220 00:41:44.172656       9 log.go:172] (0xc002b78420) (0xc0002db9a0) Stream added, broadcasting: 5
I0220 00:41:44.173903       9 log.go:172] (0xc002b78420) Reply frame received for 5
I0220 00:41:44.257441       9 log.go:172] (0xc002b78420) Data frame received for 3
I0220 00:41:44.257568       9 log.go:172] (0xc0001a1cc0) (3) Data frame handling
I0220 00:41:44.257597       9 log.go:172] (0xc0001a1cc0) (3) Data frame sent
I0220 00:41:44.339552       9 log.go:172] (0xc002b78420) Data frame received for 1
I0220 00:41:44.339637       9 log.go:172] (0xc001031a40) (1) Data frame handling
I0220 00:41:44.339655       9 log.go:172] (0xc001031a40) (1) Data frame sent
I0220 00:41:44.339674       9 log.go:172] (0xc002b78420) (0xc001031a40) Stream removed, broadcasting: 1
I0220 00:41:44.339977       9 log.go:172] (0xc002b78420) (0xc0001a1cc0) Stream removed, broadcasting: 3
I0220 00:41:44.340035       9 log.go:172] (0xc002b78420) (0xc0002db9a0) Stream removed, broadcasting: 5
I0220 00:41:44.340060       9 log.go:172] (0xc002b78420) (0xc001031a40) Stream removed, broadcasting: 1
I0220 00:41:44.340135       9 log.go:172] (0xc002b78420) Go away received
I0220 00:41:44.340156       9 log.go:172] (0xc002b78420) (0xc0001a1cc0) Stream removed, broadcasting: 3
I0220 00:41:44.340167       9 log.go:172] (0xc002b78420) (0xc0002db9a0) Stream removed, broadcasting: 5
Feb 20 00:41:44.340: INFO: Waiting for responses: map[]
Feb 20 00:41:44.350: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2435 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:41:44.350: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:41:44.393736       9 log.go:172] (0xc002b3e000) (0xc000d47180) Create stream
I0220 00:41:44.393818       9 log.go:172] (0xc002b3e000) (0xc000d47180) Stream added, broadcasting: 1
I0220 00:41:44.397907       9 log.go:172] (0xc002b3e000) Reply frame received for 1
I0220 00:41:44.397939       9 log.go:172] (0xc002b3e000) (0xc0002dbae0) Create stream
I0220 00:41:44.397944       9 log.go:172] (0xc002b3e000) (0xc0002dbae0) Stream added, broadcasting: 3
I0220 00:41:44.399091       9 log.go:172] (0xc002b3e000) Reply frame received for 3
I0220 00:41:44.399158       9 log.go:172] (0xc002b3e000) (0xc000471400) Create stream
I0220 00:41:44.399205       9 log.go:172] (0xc002b3e000) (0xc000471400) Stream added, broadcasting: 5
I0220 00:41:44.400786       9 log.go:172] (0xc002b3e000) Reply frame received for 5
I0220 00:41:44.470977       9 log.go:172] (0xc002b3e000) Data frame received for 3
I0220 00:41:44.471191       9 log.go:172] (0xc0002dbae0) (3) Data frame handling
I0220 00:41:44.471258       9 log.go:172] (0xc0002dbae0) (3) Data frame sent
I0220 00:41:44.569241       9 log.go:172] (0xc002b3e000) (0xc000471400) Stream removed, broadcasting: 5
I0220 00:41:44.569956       9 log.go:172] (0xc002b3e000) Data frame received for 1
I0220 00:41:44.570064       9 log.go:172] (0xc002b3e000) (0xc0002dbae0) Stream removed, broadcasting: 3
I0220 00:41:44.570138       9 log.go:172] (0xc000d47180) (1) Data frame handling
I0220 00:41:44.570165       9 log.go:172] (0xc000d47180) (1) Data frame sent
I0220 00:41:44.570177       9 log.go:172] (0xc002b3e000) (0xc000d47180) Stream removed, broadcasting: 1
I0220 00:41:44.570222       9 log.go:172] (0xc002b3e000) Go away received
I0220 00:41:44.570914       9 log.go:172] (0xc002b3e000) (0xc000d47180) Stream removed, broadcasting: 1
I0220 00:41:44.570953       9 log.go:172] (0xc002b3e000) (0xc0002dbae0) Stream removed, broadcasting: 3
I0220 00:41:44.570967       9 log.go:172] (0xc002b3e000) (0xc000471400) Stream removed, broadcasting: 5
Feb 20 00:41:44.571: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:41:44.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2435" for this suite.

• [SLOW TEST:36.808 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":142,"skipped":2106,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:41:44.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 20 00:41:44.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-628'
Feb 20 00:41:44.876: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 20 00:41:44.876: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb 20 00:41:44.900: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-6wznd]
Feb 20 00:41:44.900: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-6wznd" in namespace "kubectl-628" to be "running and ready"
Feb 20 00:41:44.903: INFO: Pod "e2e-test-httpd-rc-6wznd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.47731ms
Feb 20 00:41:46.918: INFO: Pod "e2e-test-httpd-rc-6wznd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017619486s
Feb 20 00:41:48.928: INFO: Pod "e2e-test-httpd-rc-6wznd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02779394s
Feb 20 00:41:51.329: INFO: Pod "e2e-test-httpd-rc-6wznd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429326611s
Feb 20 00:41:53.650: INFO: Pod "e2e-test-httpd-rc-6wznd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749872751s
Feb 20 00:41:55.657: INFO: Pod "e2e-test-httpd-rc-6wznd": Phase="Running", Reason="", readiness=true. Elapsed: 10.75746497s
Feb 20 00:41:55.657: INFO: Pod "e2e-test-httpd-rc-6wznd" satisfied condition "running and ready"
Feb 20 00:41:55.657: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-6wznd]
Feb 20 00:41:55.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-628'
Feb 20 00:41:55.849: INFO: stderr: ""
Feb 20 00:41:55.850: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.3. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.3. Set the 'ServerName' directive globally to suppress this message\n[Thu Feb 20 00:41:52.005347 2020] [mpm_event:notice] [pid 1:tid 140247748303720] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Feb 20 00:41:52.006280 2020] [core:notice] [pid 1:tid 140247748303720] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639
Feb 20 00:41:55.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-628'
Feb 20 00:41:56.205: INFO: stderr: ""
Feb 20 00:41:56.205: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:41:56.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-628" for this suite.

• [SLOW TEST:11.632 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":280,"completed":143,"skipped":2112,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:41:56.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:41:56.953: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 00:41:58.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756117, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:42:00.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756117, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:42:02.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756117, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:42:04.977: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756117, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:42:06.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756117, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:42:10.041: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:42:10.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4370-crds.webhook.example.com via the AdmissionRegistration API
Feb 20 00:42:10.633: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:42:11.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7576" for this suite.
STEP: Destroying namespace "webhook-7576-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:15.430 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":144,"skipped":2136,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:42:11.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 00:42:11.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a" in namespace "projected-771" to be "success or failure"
Feb 20 00:42:11.856: INFO: Pod "downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.285769ms
Feb 20 00:42:13.866: INFO: Pod "downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031791063s
Feb 20 00:42:15.887: INFO: Pod "downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053449514s
Feb 20 00:42:17.896: INFO: Pod "downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061952341s
Feb 20 00:42:19.905: INFO: Pod "downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070876104s
Feb 20 00:42:21.910: INFO: Pod "downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075913662s
STEP: Saw pod success
Feb 20 00:42:21.910: INFO: Pod "downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a" satisfied condition "success or failure"
Feb 20 00:42:21.912: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a container client-container: 
STEP: delete the pod
Feb 20 00:42:22.157: INFO: Waiting for pod downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a to disappear
Feb 20 00:42:22.165: INFO: Pod downwardapi-volume-6cf53781-2cbd-488f-83eb-994fc6de0f3a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:42:22.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-771" for this suite.

• [SLOW TEST:10.536 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":145,"skipped":2159,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:42:22.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 20 00:42:29.476: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:42:29.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1690" for this suite.

• [SLOW TEST:7.356 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":146,"skipped":2193,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:42:29.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-063d9689-36d2-4664-8a04-6d4577fd5430
STEP: Creating a pod to test consume configMaps
Feb 20 00:42:29.793: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de" in namespace "projected-3123" to be "success or failure"
Feb 20 00:42:29.801: INFO: Pod "pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200722ms
Feb 20 00:42:31.816: INFO: Pod "pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022736902s
Feb 20 00:42:33.859: INFO: Pod "pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06634921s
Feb 20 00:42:35.891: INFO: Pod "pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098025026s
Feb 20 00:42:37.911: INFO: Pod "pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.117996672s
STEP: Saw pod success
Feb 20 00:42:37.911: INFO: Pod "pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de" satisfied condition "success or failure"
Feb 20 00:42:37.914: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de container projected-configmap-volume-test: 
STEP: delete the pod
Feb 20 00:42:37.969: INFO: Waiting for pod pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de to disappear
Feb 20 00:42:37.983: INFO: Pod pod-projected-configmaps-0f114153-c167-4227-b048-8124455626de no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:42:37.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3123" for this suite.

• [SLOW TEST:8.444 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":147,"skipped":2230,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:42:37.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 20 00:42:46.799: INFO: Successfully updated pod "pod-update-a282ed96-ef82-46be-bc76-e2c456d75b52"
STEP: verifying the updated pod is in kubernetes
Feb 20 00:42:46.817: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:42:46.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2628" for this suite.

• [SLOW TEST:8.838 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":148,"skipped":2251,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:42:46.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:42:46.949: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:42:48.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6013" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":280,"completed":149,"skipped":2256,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:42:48.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7189
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7189
STEP: Creating statefulset with conflicting port in namespace statefulset-7189
STEP: Waiting until pod test-pod starts running in namespace statefulset-7189
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7189
Feb 20 00:43:00.460: INFO: Observed stateful pod in namespace: statefulset-7189, name: ss-0, uid: 6ca9713c-619c-41fc-9e27-8e91a1b596b0, status phase: Pending. Waiting for the statefulset controller to delete it.
Feb 20 00:43:03.062: INFO: Observed stateful pod in namespace: statefulset-7189, name: ss-0, uid: 6ca9713c-619c-41fc-9e27-8e91a1b596b0, status phase: Failed. Waiting for the statefulset controller to delete it.
Feb 20 00:43:03.088: INFO: Observed stateful pod in namespace: statefulset-7189, name: ss-0, uid: 6ca9713c-619c-41fc-9e27-8e91a1b596b0, status phase: Failed. Waiting for the statefulset controller to delete it.
Feb 20 00:43:03.181: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7189
STEP: Removing pod with conflicting port in namespace statefulset-7189
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7189 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 20 00:43:13.945: INFO: Deleting all statefulset in ns statefulset-7189
Feb 20 00:43:13.948: INFO: Scaling statefulset ss to 0
Feb 20 00:43:23.978: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 00:43:23.983: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:43:24.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7189" for this suite.

• [SLOW TEST:35.855 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":150,"skipped":2289,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:43:24.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 00:43:24.385: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b" in namespace "downward-api-2289" to be "success or failure"
Feb 20 00:43:24.391: INFO: Pod "downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.465558ms
Feb 20 00:43:26.407: INFO: Pod "downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021854613s
Feb 20 00:43:28.415: INFO: Pod "downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029834897s
Feb 20 00:43:30.424: INFO: Pod "downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038597525s
Feb 20 00:43:32.429: INFO: Pod "downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043386449s
STEP: Saw pod success
Feb 20 00:43:32.429: INFO: Pod "downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b" satisfied condition "success or failure"
Feb 20 00:43:32.440: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b container client-container: 
STEP: delete the pod
Feb 20 00:43:32.493: INFO: Waiting for pod downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b to disappear
Feb 20 00:43:32.532: INFO: Pod downwardapi-volume-6b442c54-8929-4c59-855c-2443f7f6f40b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:43:32.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2289" for this suite.

• [SLOW TEST:8.423 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":151,"skipped":2291,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:43:32.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Feb 20 00:43:32.669: INFO: Waiting up to 5m0s for pod "client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac" in namespace "containers-2379" to be "success or failure"
Feb 20 00:43:32.701: INFO: Pod "client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac": Phase="Pending", Reason="", readiness=false. Elapsed: 30.793934ms
Feb 20 00:43:34.708: INFO: Pod "client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038087264s
Feb 20 00:43:36.713: INFO: Pod "client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043417656s
Feb 20 00:43:38.720: INFO: Pod "client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049892242s
Feb 20 00:43:40.737: INFO: Pod "client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066992854s
STEP: Saw pod success
Feb 20 00:43:40.737: INFO: Pod "client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac" satisfied condition "success or failure"
Feb 20 00:43:40.741: INFO: Trying to get logs from node jerma-node pod client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac container test-container: 
STEP: delete the pod
Feb 20 00:43:40.796: INFO: Waiting for pod client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac to disappear
Feb 20 00:43:40.800: INFO: Pod client-containers-7184e3eb-7dc5-4a75-be8b-7a63e1f521ac no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:43:40.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2379" for this suite.

• [SLOW TEST:8.262 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":152,"skipped":2301,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:43:40.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-8677
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8677 to expose endpoints map[]
Feb 20 00:43:41.026: INFO: successfully validated that service endpoint-test2 in namespace services-8677 exposes endpoints map[] (26.161079ms elapsed)
STEP: Creating pod pod1 in namespace services-8677
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8677 to expose endpoints map[pod1:[80]]
Feb 20 00:43:45.164: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.074836656s elapsed, will retry)
Feb 20 00:43:48.254: INFO: successfully validated that service endpoint-test2 in namespace services-8677 exposes endpoints map[pod1:[80]] (7.164249942s elapsed)
STEP: Creating pod pod2 in namespace services-8677
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8677 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 20 00:43:53.351: INFO: Unexpected endpoints: found map[e7d89aa6-c3f0-4bb1-ac17-e0b97d0e8914:[80]], expected map[pod1:[80] pod2:[80]] (5.090329398s elapsed, will retry)
Feb 20 00:43:56.400: INFO: successfully validated that service endpoint-test2 in namespace services-8677 exposes endpoints map[pod1:[80] pod2:[80]] (8.138742018s elapsed)
STEP: Deleting pod pod1 in namespace services-8677
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8677 to expose endpoints map[pod2:[80]]
Feb 20 00:43:56.513: INFO: successfully validated that service endpoint-test2 in namespace services-8677 exposes endpoints map[pod2:[80]] (103.502546ms elapsed)
STEP: Deleting pod pod2 in namespace services-8677
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8677 to expose endpoints map[]
Feb 20 00:43:56.565: INFO: successfully validated that service endpoint-test2 in namespace services-8677 exposes endpoints map[] (8.8922ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:43:56.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8677" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:15.894 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":280,"completed":153,"skipped":2328,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:43:56.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0220 00:43:57.627051       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 20 00:43:57.627: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:43:57.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3716" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":154,"skipped":2340,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:43:57.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Feb 20 00:43:59.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5979'
Feb 20 00:44:01.258: INFO: stderr: ""
Feb 20 00:44:01.258: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 20 00:44:02.524: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:02.525: INFO: Found 0 / 1
Feb 20 00:44:04.040: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:04.040: INFO: Found 0 / 1
Feb 20 00:44:04.283: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:04.283: INFO: Found 0 / 1
Feb 20 00:44:05.333: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:05.333: INFO: Found 0 / 1
Feb 20 00:44:06.266: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:06.266: INFO: Found 0 / 1
Feb 20 00:44:07.265: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:07.265: INFO: Found 0 / 1
Feb 20 00:44:08.266: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:08.267: INFO: Found 0 / 1
Feb 20 00:44:10.455: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:10.455: INFO: Found 0 / 1
Feb 20 00:44:11.265: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:11.265: INFO: Found 0 / 1
Feb 20 00:44:12.265: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:12.265: INFO: Found 0 / 1
Feb 20 00:44:13.267: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:13.267: INFO: Found 0 / 1
Feb 20 00:44:14.264: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:14.264: INFO: Found 0 / 1
Feb 20 00:44:15.266: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:15.266: INFO: Found 0 / 1
Feb 20 00:44:16.267: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:16.267: INFO: Found 1 / 1
Feb 20 00:44:16.267: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Feb 20 00:44:16.271: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:16.272: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 20 00:44:16.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-wsfp6 --namespace=kubectl-5979 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 20 00:44:16.496: INFO: stderr: ""
Feb 20 00:44:16.496: INFO: stdout: "pod/agnhost-master-wsfp6 patched\n"
STEP: checking annotations
Feb 20 00:44:16.516: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 20 00:44:16.516: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:44:16.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5979" for this suite.

• [SLOW TEST:18.926 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":280,"completed":155,"skipped":2398,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:44:16.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1240.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1240.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1240.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1240.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 00:44:28.717: INFO: DNS probes using dns-test-9a65bb20-8088-4cfd-94cf-a498c5345f98 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1240.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1240.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1240.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1240.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 00:44:40.889: INFO: File jessie_udp@dns-test-service-3.dns-1240.svc.cluster.local from pod dns-1240/dns-test-216dab17-b0a1-436a-8503-5f19cf9818f4 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb 20 00:44:40.889: INFO: Lookups using dns-1240/dns-test-216dab17-b0a1-436a-8503-5f19cf9818f4 failed for: [jessie_udp@dns-test-service-3.dns-1240.svc.cluster.local]

Feb 20 00:44:46.240: INFO: DNS probes using dns-test-216dab17-b0a1-436a-8503-5f19cf9818f4 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1240.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1240.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1240.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1240.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 00:44:58.497: INFO: DNS probes using dns-test-384dfe4a-a74f-4a6d-8e02-b3f59c28ebe1 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:44:58.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1240" for this suite.

• [SLOW TEST:42.090 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":156,"skipped":2402,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:44:58.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 20 00:44:58.744: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:45:15.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2355" for this suite.

• [SLOW TEST:16.436 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":157,"skipped":2409,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:45:15.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-1394b27d-a4ed-47fe-9e62-07815dcfd233
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:45:15.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7077" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":158,"skipped":2423,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:45:15.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 20 00:45:25.738: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:45:25.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-559" for this suite.

• [SLOW TEST:10.579 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":159,"skipped":2457,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:45:25.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:45:34.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3122" for this suite.

• [SLOW TEST:8.207 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":160,"skipped":2458,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:45:34.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2711.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2711.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 00:45:44.236: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:44.240: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:44.242: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:44.245: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:44.260: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:44.263: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:44.267: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:44.269: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:44.275: INFO: Lookups using dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local]

Feb 20 00:45:49.285: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:49.293: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:49.299: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:49.305: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:49.324: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:49.381: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:49.389: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:49.400: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:49.415: INFO: Lookups using dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local]

Feb 20 00:45:54.287: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:54.292: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:54.295: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:54.310: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:54.349: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:54.355: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:54.358: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:54.362: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:54.370: INFO: Lookups using dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local]

Feb 20 00:45:59.286: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:59.290: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:59.294: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:59.300: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:59.326: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:59.335: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:59.341: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:59.347: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:45:59.373: INFO: Lookups using dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local]

Feb 20 00:46:04.281: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:04.285: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:04.288: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:04.291: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:04.300: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:04.304: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:04.307: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:04.311: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:04.322: INFO: Lookups using dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local]

Feb 20 00:46:09.284: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:09.297: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:09.315: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:09.322: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:09.336: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:09.342: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:09.347: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:09.353: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local from pod dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d: the server could not find the requested resource (get pods dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d)
Feb 20 00:46:09.364: INFO: Lookups using dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2711.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2711.svc.cluster.local jessie_udp@dns-test-service-2.dns-2711.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2711.svc.cluster.local]

Feb 20 00:46:14.414: INFO: DNS probes using dns-2711/dns-test-f0f79df2-ee53-4e1f-b0d1-6b1edc1b281d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:46:14.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2711" for this suite.

• [SLOW TEST:40.584 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":161,"skipped":2534,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:46:14.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:46:25.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6151" for this suite.

• [SLOW TEST:11.345 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":162,"skipped":2577,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:46:25.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5173
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5173
STEP: creating replication controller externalsvc in namespace services-5173
I0220 00:46:26.253130       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5173, replica count: 2
I0220 00:46:29.304148       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:46:32.305079       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:46:35.305988       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:46:38.307117       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb 20 00:46:38.344: INFO: Creating new exec pod
Feb 20 00:46:46.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5173 execpodp64tc -- /bin/sh -x -c nslookup clusterip-service'
Feb 20 00:46:46.966: INFO: stderr: "I0220 00:46:46.682924    2446 log.go:172] (0xc0006dc630) (0xc0006a8000) Create stream\nI0220 00:46:46.683321    2446 log.go:172] (0xc0006dc630) (0xc0006a8000) Stream added, broadcasting: 1\nI0220 00:46:46.688170    2446 log.go:172] (0xc0006dc630) Reply frame received for 1\nI0220 00:46:46.688284    2446 log.go:172] (0xc0006dc630) (0xc0006ad7c0) Create stream\nI0220 00:46:46.688314    2446 log.go:172] (0xc0006dc630) (0xc0006ad7c0) Stream added, broadcasting: 3\nI0220 00:46:46.690383    2446 log.go:172] (0xc0006dc630) Reply frame received for 3\nI0220 00:46:46.690427    2446 log.go:172] (0xc0006dc630) (0xc0006a8140) Create stream\nI0220 00:46:46.690439    2446 log.go:172] (0xc0006dc630) (0xc0006a8140) Stream added, broadcasting: 5\nI0220 00:46:46.695823    2446 log.go:172] (0xc0006dc630) Reply frame received for 5\nI0220 00:46:46.830629    2446 log.go:172] (0xc0006dc630) Data frame received for 5\nI0220 00:46:46.830715    2446 log.go:172] (0xc0006a8140) (5) Data frame handling\nI0220 00:46:46.830767    2446 log.go:172] (0xc0006a8140) (5) Data frame sent\n+ nslookup clusterip-service\nI0220 00:46:46.857841    2446 log.go:172] (0xc0006dc630) Data frame received for 3\nI0220 00:46:46.858244    2446 log.go:172] (0xc0006ad7c0) (3) Data frame handling\nI0220 00:46:46.858278    2446 log.go:172] (0xc0006ad7c0) (3) Data frame sent\nI0220 00:46:46.863028    2446 log.go:172] (0xc0006dc630) Data frame received for 3\nI0220 00:46:46.863120    2446 log.go:172] (0xc0006ad7c0) (3) Data frame handling\nI0220 00:46:46.863221    2446 log.go:172] (0xc0006ad7c0) (3) Data frame sent\nI0220 00:46:46.955037    2446 log.go:172] (0xc0006dc630) Data frame received for 1\nI0220 00:46:46.955122    2446 log.go:172] (0xc0006dc630) (0xc0006ad7c0) Stream removed, broadcasting: 3\nI0220 00:46:46.955188    2446 log.go:172] (0xc0006a8000) (1) Data frame handling\nI0220 00:46:46.955215    2446 log.go:172] (0xc0006a8000) (1) Data frame sent\nI0220 00:46:46.955239    2446 log.go:172] (0xc0006dc630) (0xc0006a8140) Stream removed, broadcasting: 5\nI0220 00:46:46.955277    2446 log.go:172] (0xc0006dc630) (0xc0006a8000) Stream removed, broadcasting: 1\nI0220 00:46:46.955296    2446 log.go:172] (0xc0006dc630) Go away received\nI0220 00:46:46.956650    2446 log.go:172] (0xc0006dc630) (0xc0006a8000) Stream removed, broadcasting: 1\nI0220 00:46:46.956660    2446 log.go:172] (0xc0006dc630) (0xc0006ad7c0) Stream removed, broadcasting: 3\nI0220 00:46:46.956666    2446 log.go:172] (0xc0006dc630) (0xc0006a8140) Stream removed, broadcasting: 5\n"
Feb 20 00:46:46.966: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5173.svc.cluster.local\tcanonical name = externalsvc.services-5173.svc.cluster.local.\nName:\texternalsvc.services-5173.svc.cluster.local\nAddress: 10.96.12.244\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5173, will wait for the garbage collector to delete the pods
Feb 20 00:46:47.044: INFO: Deleting ReplicationController externalsvc took: 17.284285ms
Feb 20 00:46:47.445: INFO: Terminating ReplicationController externalsvc pods took: 401.000835ms
Feb 20 00:47:03.240: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:47:03.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5173" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:37.324 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":163,"skipped":2588,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:47:03.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-8721
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 20 00:47:03.396: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 20 00:47:03.445: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:47:05.451: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:47:07.452: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:47:10.326: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:47:11.542: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:47:13.564: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:47:15.453: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 20 00:47:17.453: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:47:19.454: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:47:21.452: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:47:23.454: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:47:25.454: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 20 00:47:27.453: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 20 00:47:27.461: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 20 00:47:37.614: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8721 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:47:37.614: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:47:37.696052       9 log.go:172] (0xc0021b6dc0) (0xc001526640) Create stream
I0220 00:47:37.696350       9 log.go:172] (0xc0021b6dc0) (0xc001526640) Stream added, broadcasting: 1
I0220 00:47:37.702151       9 log.go:172] (0xc0021b6dc0) Reply frame received for 1
I0220 00:47:37.702266       9 log.go:172] (0xc0021b6dc0) (0xc001073540) Create stream
I0220 00:47:37.702292       9 log.go:172] (0xc0021b6dc0) (0xc001073540) Stream added, broadcasting: 3
I0220 00:47:37.704014       9 log.go:172] (0xc0021b6dc0) Reply frame received for 3
I0220 00:47:37.704065       9 log.go:172] (0xc0021b6dc0) (0xc001073680) Create stream
I0220 00:47:37.704083       9 log.go:172] (0xc0021b6dc0) (0xc001073680) Stream added, broadcasting: 5
I0220 00:47:37.706166       9 log.go:172] (0xc0021b6dc0) Reply frame received for 5
I0220 00:47:37.827667       9 log.go:172] (0xc0021b6dc0) Data frame received for 3
I0220 00:47:37.827902       9 log.go:172] (0xc001073540) (3) Data frame handling
I0220 00:47:37.827969       9 log.go:172] (0xc001073540) (3) Data frame sent
I0220 00:47:37.953848       9 log.go:172] (0xc0021b6dc0) (0xc001073680) Stream removed, broadcasting: 5
I0220 00:47:37.954205       9 log.go:172] (0xc0021b6dc0) (0xc001073540) Stream removed, broadcasting: 3
I0220 00:47:37.954534       9 log.go:172] (0xc0021b6dc0) Data frame received for 1
I0220 00:47:37.954576       9 log.go:172] (0xc001526640) (1) Data frame handling
I0220 00:47:37.954907       9 log.go:172] (0xc001526640) (1) Data frame sent
I0220 00:47:37.955150       9 log.go:172] (0xc0021b6dc0) (0xc001526640) Stream removed, broadcasting: 1
I0220 00:47:37.955235       9 log.go:172] (0xc0021b6dc0) Go away received
I0220 00:47:37.955701       9 log.go:172] (0xc0021b6dc0) (0xc001526640) Stream removed, broadcasting: 1
I0220 00:47:37.955727       9 log.go:172] (0xc0021b6dc0) (0xc001073540) Stream removed, broadcasting: 3
I0220 00:47:37.955776       9 log.go:172] (0xc0021b6dc0) (0xc001073680) Stream removed, broadcasting: 5
Feb 20 00:47:37.955: INFO: Found all expected endpoints: [netserver-0]
Feb 20 00:47:37.962: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8721 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 20 00:47:37.962: INFO: >>> kubeConfig: /root/.kube/config
I0220 00:47:38.016127       9 log.go:172] (0xc002ae8790) (0xc000b3ea00) Create stream
I0220 00:47:38.016362       9 log.go:172] (0xc002ae8790) (0xc000b3ea00) Stream added, broadcasting: 1
I0220 00:47:38.027539       9 log.go:172] (0xc002ae8790) Reply frame received for 1
I0220 00:47:38.027678       9 log.go:172] (0xc002ae8790) (0xc00224c320) Create stream
I0220 00:47:38.027703       9 log.go:172] (0xc002ae8790) (0xc00224c320) Stream added, broadcasting: 3
I0220 00:47:38.030012       9 log.go:172] (0xc002ae8790) Reply frame received for 3
I0220 00:47:38.030048       9 log.go:172] (0xc002ae8790) (0xc000e26be0) Create stream
I0220 00:47:38.030062       9 log.go:172] (0xc002ae8790) (0xc000e26be0) Stream added, broadcasting: 5
I0220 00:47:38.032681       9 log.go:172] (0xc002ae8790) Reply frame received for 5
I0220 00:47:38.113830       9 log.go:172] (0xc002ae8790) Data frame received for 3
I0220 00:47:38.113901       9 log.go:172] (0xc00224c320) (3) Data frame handling
I0220 00:47:38.113921       9 log.go:172] (0xc00224c320) (3) Data frame sent
I0220 00:47:38.190939       9 log.go:172] (0xc002ae8790) Data frame received for 1
I0220 00:47:38.191027       9 log.go:172] (0xc000b3ea00) (1) Data frame handling
I0220 00:47:38.191048       9 log.go:172] (0xc000b3ea00) (1) Data frame sent
I0220 00:47:38.191453       9 log.go:172] (0xc002ae8790) (0xc000e26be0) Stream removed, broadcasting: 5
I0220 00:47:38.191753       9 log.go:172] (0xc002ae8790) (0xc000b3ea00) Stream removed, broadcasting: 1
I0220 00:47:38.192205       9 log.go:172] (0xc002ae8790) (0xc00224c320) Stream removed, broadcasting: 3
I0220 00:47:38.192226       9 log.go:172] (0xc002ae8790) Go away received
I0220 00:47:38.192315       9 log.go:172] (0xc002ae8790) (0xc000b3ea00) Stream removed, broadcasting: 1
I0220 00:47:38.192332       9 log.go:172] (0xc002ae8790) (0xc00224c320) Stream removed, broadcasting: 3
I0220 00:47:38.192343       9 log.go:172] (0xc002ae8790) (0xc000e26be0) Stream removed, broadcasting: 5
Feb 20 00:47:38.192: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:47:38.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8721" for this suite.

• [SLOW TEST:34.908 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":164,"skipped":2595,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:47:38.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:47:38.673: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1250bca0-c959-4001-b8c2-a8fafd105766", Controller:(*bool)(0xc0059776b2), BlockOwnerDeletion:(*bool)(0xc0059776b3)}}
Feb 20 00:47:38.700: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"def096b3-41c7-4f6c-8ce0-c5bac352f3ae", Controller:(*bool)(0xc000927eca), BlockOwnerDeletion:(*bool)(0xc000927ecb)}}
Feb 20 00:47:38.714: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"03776109-cb93-4dce-acbc-9651547ecaa5", Controller:(*bool)(0xc00326a11a), BlockOwnerDeletion:(*bool)(0xc00326a11b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:47:44.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2799" for this suite.

• [SLOW TEST:6.626 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":165,"skipped":2604,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:47:44.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-259
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-259
I0220 00:47:45.368006       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-259, replica count: 2
I0220 00:47:48.419439       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:47:51.420114       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:47:54.420776       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:47:57.421333       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:48:00.421884       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 20 00:48:00.422: INFO: Creating new exec pod
Feb 20 00:48:09.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-259 execpod46l7h -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 20 00:48:09.874: INFO: stderr: "I0220 00:48:09.656896    2467 log.go:172] (0xc00093f6b0) (0xc000a686e0) Create stream\nI0220 00:48:09.657136    2467 log.go:172] (0xc00093f6b0) (0xc000a686e0) Stream added, broadcasting: 1\nI0220 00:48:09.666846    2467 log.go:172] (0xc00093f6b0) Reply frame received for 1\nI0220 00:48:09.666918    2467 log.go:172] (0xc00093f6b0) (0xc000638780) Create stream\nI0220 00:48:09.666929    2467 log.go:172] (0xc00093f6b0) (0xc000638780) Stream added, broadcasting: 3\nI0220 00:48:09.669444    2467 log.go:172] (0xc00093f6b0) Reply frame received for 3\nI0220 00:48:09.669514    2467 log.go:172] (0xc00093f6b0) (0xc0003d3400) Create stream\nI0220 00:48:09.669522    2467 log.go:172] (0xc00093f6b0) (0xc0003d3400) Stream added, broadcasting: 5\nI0220 00:48:09.671054    2467 log.go:172] (0xc00093f6b0) Reply frame received for 5\nI0220 00:48:09.748954    2467 log.go:172] (0xc00093f6b0) Data frame received for 5\nI0220 00:48:09.748981    2467 log.go:172] (0xc0003d3400) (5) Data frame handling\nI0220 00:48:09.749005    2467 log.go:172] (0xc0003d3400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0220 00:48:09.760178    2467 log.go:172] (0xc00093f6b0) Data frame received for 5\nI0220 00:48:09.760200    2467 log.go:172] (0xc0003d3400) (5) Data frame handling\nI0220 00:48:09.760218    2467 log.go:172] (0xc0003d3400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0220 00:48:09.857582    2467 log.go:172] (0xc00093f6b0) Data frame received for 1\nI0220 00:48:09.857786    2467 log.go:172] (0xc00093f6b0) (0xc0003d3400) Stream removed, broadcasting: 5\nI0220 00:48:09.857852    2467 log.go:172] (0xc000a686e0) (1) Data frame handling\nI0220 00:48:09.857890    2467 log.go:172] (0xc000a686e0) (1) Data frame sent\nI0220 00:48:09.857957    2467 log.go:172] (0xc00093f6b0) (0xc000638780) Stream removed, broadcasting: 3\nI0220 00:48:09.858040    2467 log.go:172] (0xc00093f6b0) (0xc000a686e0) Stream removed, broadcasting: 1\nI0220 00:48:09.858062    2467 log.go:172] (0xc00093f6b0) Go away received\nI0220 00:48:09.859508    2467 log.go:172] (0xc00093f6b0) (0xc000a686e0) Stream removed, broadcasting: 1\nI0220 00:48:09.859568    2467 log.go:172] (0xc00093f6b0) (0xc000638780) Stream removed, broadcasting: 3\nI0220 00:48:09.859641    2467 log.go:172] (0xc00093f6b0) (0xc0003d3400) Stream removed, broadcasting: 5\n"
Feb 20 00:48:09.874: INFO: stdout: ""
Feb 20 00:48:09.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-259 execpod46l7h -- /bin/sh -x -c nc -zv -t -w 2 10.96.234.57 80'
Feb 20 00:48:10.285: INFO: stderr: "I0220 00:48:10.062794    2485 log.go:172] (0xc000931340) (0xc000974960) Create stream\nI0220 00:48:10.062989    2485 log.go:172] (0xc000931340) (0xc000974960) Stream added, broadcasting: 1\nI0220 00:48:10.072457    2485 log.go:172] (0xc000931340) Reply frame received for 1\nI0220 00:48:10.072556    2485 log.go:172] (0xc000931340) (0xc0006ce6e0) Create stream\nI0220 00:48:10.072572    2485 log.go:172] (0xc000931340) (0xc0006ce6e0) Stream added, broadcasting: 3\nI0220 00:48:10.073630    2485 log.go:172] (0xc000931340) Reply frame received for 3\nI0220 00:48:10.073658    2485 log.go:172] (0xc000931340) (0xc00053b360) Create stream\nI0220 00:48:10.073667    2485 log.go:172] (0xc000931340) (0xc00053b360) Stream added, broadcasting: 5\nI0220 00:48:10.074979    2485 log.go:172] (0xc000931340) Reply frame received for 5\nI0220 00:48:10.154345    2485 log.go:172] (0xc000931340) Data frame received for 5\nI0220 00:48:10.154407    2485 log.go:172] (0xc00053b360) (5) Data frame handling\nI0220 00:48:10.154446    2485 log.go:172] (0xc00053b360) (5) Data frame sent\nI0220 00:48:10.154459    2485 log.go:172] (0xc000931340) Data frame received for 5\nI0220 00:48:10.154473    2485 log.go:172] (0xc00053b360) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.234.57 80\nConnection to 10.96.234.57 80 port [tcp/http] succeeded!\nI0220 00:48:10.154526    2485 log.go:172] (0xc00053b360) (5) Data frame sent\nI0220 00:48:10.268490    2485 log.go:172] (0xc000931340) (0xc0006ce6e0) Stream removed, broadcasting: 3\nI0220 00:48:10.268803    2485 log.go:172] (0xc000931340) Data frame received for 1\nI0220 00:48:10.268828    2485 log.go:172] (0xc000974960) (1) Data frame handling\nI0220 00:48:10.268860    2485 log.go:172] (0xc000974960) (1) Data frame sent\nI0220 00:48:10.268885    2485 log.go:172] (0xc000931340) (0xc000974960) Stream removed, broadcasting: 1\nI0220 00:48:10.269532    2485 log.go:172] (0xc000931340) (0xc00053b360) Stream removed, broadcasting: 5\nI0220 00:48:10.269644    2485 log.go:172] (0xc000931340) Go away received\nI0220 00:48:10.270057    2485 log.go:172] (0xc000931340) (0xc000974960) Stream removed, broadcasting: 1\nI0220 00:48:10.270076    2485 log.go:172] (0xc000931340) (0xc0006ce6e0) Stream removed, broadcasting: 3\nI0220 00:48:10.270085    2485 log.go:172] (0xc000931340) (0xc00053b360) Stream removed, broadcasting: 5\n"
Feb 20 00:48:10.285: INFO: stdout: ""
Feb 20 00:48:10.285: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:48:10.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-259" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:25.538 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":166,"skipped":2672,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:48:10.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:48:10.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb 20 00:48:14.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8882 create -f -'
Feb 20 00:48:17.129: INFO: stderr: ""
Feb 20 00:48:17.129: INFO: stdout: "e2e-test-crd-publish-openapi-831-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 20 00:48:17.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8882 delete e2e-test-crd-publish-openapi-831-crds test-foo'
Feb 20 00:48:17.286: INFO: stderr: ""
Feb 20 00:48:17.286: INFO: stdout: "e2e-test-crd-publish-openapi-831-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb 20 00:48:17.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8882 apply -f -'
Feb 20 00:48:17.613: INFO: stderr: ""
Feb 20 00:48:17.613: INFO: stdout: "e2e-test-crd-publish-openapi-831-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 20 00:48:17.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8882 delete e2e-test-crd-publish-openapi-831-crds test-foo'
Feb 20 00:48:17.806: INFO: stderr: ""
Feb 20 00:48:17.806: INFO: stdout: "e2e-test-crd-publish-openapi-831-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb 20 00:48:17.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8882 create -f -'
Feb 20 00:48:18.244: INFO: rc: 1
Feb 20 00:48:18.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8882 apply -f -'
Feb 20 00:48:18.598: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb 20 00:48:18.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8882 create -f -'
Feb 20 00:48:18.914: INFO: rc: 1
Feb 20 00:48:18.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8882 apply -f -'
Feb 20 00:48:19.317: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb 20 00:48:19.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-831-crds'
Feb 20 00:48:19.662: INFO: stderr: ""
Feb 20 00:48:19.662: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-831-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb 20 00:48:19.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-831-crds.metadata'
Feb 20 00:48:20.042: INFO: stderr: ""
Feb 20 00:48:20.043: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-831-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Feb 20 00:48:20.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-831-crds.spec'
Feb 20 00:48:20.434: INFO: stderr: ""
Feb 20 00:48:20.435: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-831-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb 20 00:48:20.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-831-crds.spec.bars'
Feb 20 00:48:20.754: INFO: stderr: ""
Feb 20 00:48:20.754: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-831-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb 20 00:48:20.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-831-crds.spec.bars2'
Feb 20 00:48:21.069: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:48:24.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8882" for this suite.

• [SLOW TEST:14.449 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":167,"skipped":2689,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:48:24.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 20 00:48:24.918: INFO: Waiting up to 5m0s for pod "pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6" in namespace "emptydir-5684" to be "success or failure"
Feb 20 00:48:24.923: INFO: Pod "pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.893925ms
Feb 20 00:48:26.930: INFO: Pod "pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011676775s
Feb 20 00:48:28.940: INFO: Pod "pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021638749s
Feb 20 00:48:30.946: INFO: Pod "pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027316754s
Feb 20 00:48:32.956: INFO: Pod "pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037804492s
STEP: Saw pod success
Feb 20 00:48:32.956: INFO: Pod "pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6" satisfied condition "success or failure"
Feb 20 00:48:32.963: INFO: Trying to get logs from node jerma-node pod pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6 container test-container: 
STEP: delete the pod
Feb 20 00:48:33.026: INFO: Waiting for pod pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6 to disappear
Feb 20 00:48:33.109: INFO: Pod pod-9ee940c5-607f-4b66-a0d8-42d8269f5ce6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:48:33.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5684" for this suite.

• [SLOW TEST:8.290 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":168,"skipped":2691,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:48:33.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e
Feb 20 00:48:33.379: INFO: Pod name my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e: Found 0 pods out of 1
Feb 20 00:48:38.386: INFO: Pod name my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e: Found 1 pods out of 1
Feb 20 00:48:38.386: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e" are running
Feb 20 00:48:40.398: INFO: Pod "my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e-kr8x4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 00:48:33 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 00:48:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 00:48:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 00:48:33 +0000 UTC Reason: Message:}])
Feb 20 00:48:40.398: INFO: Trying to dial the pod
Feb 20 00:48:45.421: INFO: Controller my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e: Got expected result from replica 1 [my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e-kr8x4]: "my-hostname-basic-85cb6226-9f0c-481a-a229-2c52ae24837e-kr8x4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:48:45.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3356" for this suite.

• [SLOW TEST:12.312 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":169,"skipped":2738,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:48:45.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:48:53.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-972" for this suite.

• [SLOW TEST:8.524 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":170,"skipped":2738,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:48:53.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:48:55.683: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 00:48:57.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:48:59.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:49:01.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756535, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:49:04.868: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:49:04.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3883" for this suite.
STEP: Destroying namespace "webhook-3883-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.290 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":171,"skipped":2763,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:49:05.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-59251468-50ef-4770-8f1f-3443464b6a31
STEP: Creating a pod to test consume configMaps
Feb 20 00:49:05.424: INFO: Waiting up to 5m0s for pod "pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0" in namespace "configmap-6542" to be "success or failure"
Feb 20 00:49:05.565: INFO: Pod "pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0": Phase="Pending", Reason="", readiness=false. Elapsed: 139.75445ms
Feb 20 00:49:07.571: INFO: Pod "pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14609336s
Feb 20 00:49:09.579: INFO: Pod "pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153799717s
Feb 20 00:49:11.584: INFO: Pod "pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159582058s
Feb 20 00:49:13.592: INFO: Pod "pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166769235s
Feb 20 00:49:15.597: INFO: Pod "pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.171735796s
Feb 20 00:49:17.728: INFO: Pod "pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.303107869s
STEP: Saw pod success
Feb 20 00:49:17.728: INFO: Pod "pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0" satisfied condition "success or failure"
Feb 20 00:49:17.732: INFO: Trying to get logs from node jerma-node pod pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0 container configmap-volume-test: 
STEP: delete the pod
Feb 20 00:49:17.912: INFO: Waiting for pod pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0 to disappear
Feb 20 00:49:17.921: INFO: Pod pod-configmaps-de94cf02-c377-4b8a-bcb2-5acdfb3159d0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:49:17.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6542" for this suite.

• [SLOW TEST:12.700 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":172,"skipped":2786,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:49:17.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 20 00:49:18.138: INFO: Waiting up to 5m0s for pod "downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1" in namespace "downward-api-8454" to be "success or failure"
Feb 20 00:49:18.148: INFO: Pod "downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.551068ms
Feb 20 00:49:20.161: INFO: Pod "downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022407339s
Feb 20 00:49:22.181: INFO: Pod "downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042384965s
Feb 20 00:49:24.200: INFO: Pod "downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061070053s
Feb 20 00:49:26.207: INFO: Pod "downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068649312s
STEP: Saw pod success
Feb 20 00:49:26.207: INFO: Pod "downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1" satisfied condition "success or failure"
Feb 20 00:49:26.211: INFO: Trying to get logs from node jerma-node pod downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1 container dapi-container: 
STEP: delete the pod
Feb 20 00:49:26.312: INFO: Waiting for pod downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1 to disappear
Feb 20 00:49:26.325: INFO: Pod downward-api-a2cade65-3ffc-4859-ba58-894ecb8442a1 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:49:26.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8454" for this suite.

• [SLOW TEST:8.384 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":173,"skipped":2863,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:49:26.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 00:49:26.572: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb" in namespace "downward-api-7200" to be "success or failure"
Feb 20 00:49:26.597: INFO: Pod "downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb": Phase="Pending", Reason="", readiness=false. Elapsed: 25.016854ms
Feb 20 00:49:28.606: INFO: Pod "downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033701724s
Feb 20 00:49:30.615: INFO: Pod "downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042732036s
Feb 20 00:49:32.620: INFO: Pod "downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048259771s
Feb 20 00:49:34.664: INFO: Pod "downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091624236s
STEP: Saw pod success
Feb 20 00:49:34.664: INFO: Pod "downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb" satisfied condition "success or failure"
Feb 20 00:49:34.679: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb container client-container: 
STEP: delete the pod
Feb 20 00:49:34.762: INFO: Waiting for pod downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb to disappear
Feb 20 00:49:34.776: INFO: Pod downwardapi-volume-2af8de71-de99-4d9a-b51a-43c3fef831fb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:49:34.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7200" for this suite.

• [SLOW TEST:8.548 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":174,"skipped":2898,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:49:34.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-2a870c0e-98b8-4f68-900b-e8a33a1009af
STEP: Creating a pod to test consume configMaps
Feb 20 00:49:35.102: INFO: Waiting up to 5m0s for pod "pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85" in namespace "configmap-4323" to be "success or failure"
Feb 20 00:49:35.129: INFO: Pod "pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85": Phase="Pending", Reason="", readiness=false. Elapsed: 26.158942ms
Feb 20 00:49:37.147: INFO: Pod "pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04472675s
Feb 20 00:49:39.152: INFO: Pod "pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049403941s
Feb 20 00:49:41.167: INFO: Pod "pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064302685s
Feb 20 00:49:43.172: INFO: Pod "pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069362527s
STEP: Saw pod success
Feb 20 00:49:43.172: INFO: Pod "pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85" satisfied condition "success or failure"
Feb 20 00:49:43.175: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85 container configmap-volume-test: 
STEP: delete the pod
Feb 20 00:49:43.211: INFO: Waiting for pod pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85 to disappear
Feb 20 00:49:43.222: INFO: Pod pod-configmaps-a220928f-3c49-4c82-9842-33040d66ce85 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:49:43.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4323" for this suite.

• [SLOW TEST:8.338 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":175,"skipped":2914,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:49:43.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as an owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0220 00:50:00.482018       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 20 00:50:00.482: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:50:00.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6239" for this suite.

• [SLOW TEST:17.269 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":176,"skipped":2940,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:50:00.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:50:02.921: INFO: Creating deployment "test-recreate-deployment"
Feb 20 00:50:02.982: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 20 00:50:03.630: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 20 00:50:05.672: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 20 00:50:05.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756604, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:50:08.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756604, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:50:10.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756604, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:50:11.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756604, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:50:13.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756604, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:50:15.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756604, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:50:17.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756604, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:50:19.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756604, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756603, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:50:21.703: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 20 00:50:21.716: INFO: Updating deployment test-recreate-deployment
Feb 20 00:50:21.716: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 20 00:50:22.031: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-8948 /apis/apps/v1/namespaces/deployment-8948/deployments/test-recreate-deployment 9d5f1963-3e77-4a92-85c9-46f8ffb5b9ec 9508690 2 2020-02-20 00:50:02 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003832778  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-20 00:50:21 +0000 UTC,LastTransitionTime:2020-02-20 00:50:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-20 00:50:21 +0000 UTC,LastTransitionTime:2020-02-20 00:50:03 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb 20 00:50:22.037: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-8948 /apis/apps/v1/namespaces/deployment-8948/replicasets/test-recreate-deployment-5f94c574ff e170979f-8ada-49ae-8fce-36a29ccc5985 9508689 1 2020-02-20 00:50:21 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 9d5f1963-3e77-4a92-85c9-46f8ffb5b9ec 0xc003832ca7 0xc003832ca8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003832d48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 20 00:50:22.037: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 20 00:50:22.037: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-8948 /apis/apps/v1/namespaces/deployment-8948/replicasets/test-recreate-deployment-799c574856 dddfefc9-d8d4-4d5b-bb15-77b139fa984b 9508679 2 2020-02-20 00:50:02 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 9d5f1963-3e77-4a92-85c9-46f8ffb5b9ec 0xc003832df7 0xc003832df8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003832e88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 20 00:50:22.084: INFO: Pod "test-recreate-deployment-5f94c574ff-wfhph" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-wfhph test-recreate-deployment-5f94c574ff- deployment-8948 /api/v1/namespaces/deployment-8948/pods/test-recreate-deployment-5f94c574ff-wfhph 71aa3d20-3bc0-4501-a196-ca6a7897bc72 9508691 0 2020-02-20 00:50:21 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff e170979f-8ada-49ae-8fce-36a29ccc5985 0xc003833347 0xc003833348}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rkzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rkzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rkzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:50:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:50:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:50:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-20 00:50:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-20 00:50:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:50:22.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8948" for this suite.

• [SLOW TEST:21.601 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":177,"skipped":2981,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:50:22.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Feb 20 00:50:22.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 20 00:50:33.603: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0220 00:50:32.557260    2775 log.go:172] (0xc000515130) (0xc00070a140) Create stream\nI0220 00:50:32.557544    2775 log.go:172] (0xc000515130) (0xc00070a140) Stream added, broadcasting: 1\nI0220 00:50:32.561328    2775 log.go:172] (0xc000515130) Reply frame received for 1\nI0220 00:50:32.561410    2775 log.go:172] (0xc000515130) (0xc0005a7c20) Create stream\nI0220 00:50:32.561430    2775 log.go:172] (0xc000515130) (0xc0005a7c20) Stream added, broadcasting: 3\nI0220 00:50:32.562800    2775 log.go:172] (0xc000515130) Reply frame received for 3\nI0220 00:50:32.562834    2775 log.go:172] (0xc000515130) (0xc00070a1e0) Create stream\nI0220 00:50:32.562842    2775 log.go:172] (0xc000515130) (0xc00070a1e0) Stream added, broadcasting: 5\nI0220 00:50:32.565247    2775 log.go:172] (0xc000515130) Reply frame received for 5\nI0220 00:50:32.565370    2775 log.go:172] (0xc000515130) (0xc00074d4a0) Create stream\nI0220 00:50:32.565380    2775 log.go:172] (0xc000515130) (0xc00074d4a0) Stream added, broadcasting: 7\nI0220 00:50:32.567688    2775 log.go:172] (0xc000515130) Reply frame received for 7\nI0220 00:50:32.567986    2775 log.go:172] (0xc0005a7c20) (3) Writing data frame\nI0220 00:50:32.568173    2775 log.go:172] (0xc0005a7c20) (3) Writing data frame\nI0220 00:50:32.575921    2775 log.go:172] (0xc000515130) Data frame received for 5\nI0220 00:50:32.575951    2775 log.go:172] (0xc00070a1e0) (5) Data frame handling\nI0220 00:50:32.575978    2775 log.go:172] (0xc00070a1e0) (5) Data frame sent\nI0220 00:50:32.581576    2775 log.go:172] (0xc000515130) Data frame received for 5\nI0220 00:50:32.581593    2775 log.go:172] (0xc00070a1e0) (5) Data frame handling\nI0220 00:50:32.581601    2775 log.go:172] (0xc00070a1e0) (5) Data frame sent\nI0220 00:50:33.562425    2775 log.go:172] (0xc000515130) (0xc0005a7c20) Stream removed, broadcasting: 3\nI0220 00:50:33.562518    2775 log.go:172] (0xc000515130) Data frame received for 1\nI0220 00:50:33.562536    2775 log.go:172] (0xc00070a140) (1) Data frame handling\nI0220 00:50:33.562574    2775 log.go:172] (0xc00070a140) (1) Data frame sent\nI0220 00:50:33.562588    2775 log.go:172] (0xc000515130) (0xc00070a140) Stream removed, broadcasting: 1\nI0220 00:50:33.562910    2775 log.go:172] (0xc000515130) (0xc00070a1e0) Stream removed, broadcasting: 5\nI0220 00:50:33.562934    2775 log.go:172] (0xc000515130) (0xc00074d4a0) Stream removed, broadcasting: 7\nI0220 00:50:33.562950    2775 log.go:172] (0xc000515130) (0xc00070a140) Stream removed, broadcasting: 1\nI0220 00:50:33.562957    2775 log.go:172] (0xc000515130) (0xc0005a7c20) Stream removed, broadcasting: 3\nI0220 00:50:33.562962    2775 log.go:172] (0xc000515130) (0xc00070a1e0) Stream removed, broadcasting: 5\nI0220 00:50:33.562968    2775 log.go:172] (0xc000515130) (0xc00074d4a0) Stream removed, broadcasting: 7\nI0220 00:50:33.564965    2775 log.go:172] (0xc000515130) Go away received\n"
Feb 20 00:50:33.604: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:50:35.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3591" for this suite.

• [SLOW TEST:13.525 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":280,"completed":178,"skipped":2986,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:50:35.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-05666299-ec8d-4c17-989d-4517b1d6b316
STEP: Creating a pod to test consume configMaps
Feb 20 00:50:35.911: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565" in namespace "projected-5309" to be "success or failure"
Feb 20 00:50:35.917: INFO: Pod "pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565": Phase="Pending", Reason="", readiness=false. Elapsed: 5.772145ms
Feb 20 00:50:38.031: INFO: Pod "pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119147981s
Feb 20 00:50:40.041: INFO: Pod "pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129305781s
Feb 20 00:50:42.669: INFO: Pod "pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565": Phase="Pending", Reason="", readiness=false. Elapsed: 6.757762231s
Feb 20 00:50:44.693: INFO: Pod "pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565": Phase="Pending", Reason="", readiness=false. Elapsed: 8.78199587s
Feb 20 00:50:46.700: INFO: Pod "pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.78862864s
STEP: Saw pod success
Feb 20 00:50:46.700: INFO: Pod "pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565" satisfied condition "success or failure"
Feb 20 00:50:46.703: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 20 00:50:46.762: INFO: Waiting for pod pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565 to disappear
Feb 20 00:50:46.775: INFO: Pod pod-projected-configmaps-f360f960-9d35-4b70-91a9-00bc27063565 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:50:46.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5309" for this suite.

• [SLOW TEST:11.151 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":179,"skipped":3018,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:50:46.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:50:46.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4741" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":180,"skipped":3029,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:50:47.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-ee9feaeb-9a62-4c23-9c6b-55e5da452a57
STEP: Creating configMap with name cm-test-opt-upd-c8e5ff99-bfd7-4258-bb26-99a8556dd3d3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ee9feaeb-9a62-4c23-9c6b-55e5da452a57
STEP: Updating configmap cm-test-opt-upd-c8e5ff99-bfd7-4258-bb26-99a8556dd3d3
STEP: Creating configMap with name cm-test-opt-create-e0b3e867-5639-40eb-98dd-5a96d33fbd92
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:52:10.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4398" for this suite.

• [SLOW TEST:83.362 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":181,"skipped":3040,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:52:10.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:52:26.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5088" for this suite.

• [SLOW TEST:16.433 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":182,"skipped":3049,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:52:26.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Feb 20 00:52:27.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:52:42.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1134" for this suite.

• [SLOW TEST:16.034 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":183,"skipped":3056,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:52:42.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-644.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-644.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-644.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-644.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 136.2.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.2.136_udp@PTR;check="$$(dig +tcp +noall +answer +search 136.2.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.2.136_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-644.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-644.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-644.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-644.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 136.2.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.2.136_udp@PTR;check="$$(dig +tcp +noall +answer +search 136.2.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.2.136_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 00:52:55.286: INFO: Unable to read wheezy_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:52:55.292: INFO: Unable to read wheezy_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:52:55.297: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:52:55.299: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:52:55.335: INFO: Unable to read jessie_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:52:55.337: INFO: Unable to read jessie_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:52:55.340: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:52:55.344: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:52:55.359: INFO: Lookups using dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65 failed for: [wheezy_udp@dns-test-service.dns-644.svc.cluster.local wheezy_tcp@dns-test-service.dns-644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_udp@dns-test-service.dns-644.svc.cluster.local jessie_tcp@dns-test-service.dns-644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local]

Feb 20 00:53:00.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:00.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:00.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:00.398: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:00.465: INFO: Unable to read jessie_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:00.477: INFO: Unable to read jessie_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:00.533: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:00.542: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:00.579: INFO: Lookups using dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65 failed for: [wheezy_udp@dns-test-service.dns-644.svc.cluster.local wheezy_tcp@dns-test-service.dns-644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_udp@dns-test-service.dns-644.svc.cluster.local jessie_tcp@dns-test-service.dns-644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local]

Feb 20 00:53:05.374: INFO: Unable to read wheezy_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:05.383: INFO: Unable to read wheezy_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:05.391: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:05.398: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:05.538: INFO: Unable to read jessie_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:05.546: INFO: Unable to read jessie_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:05.551: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:05.556: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:05.582: INFO: Lookups using dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65 failed for: [wheezy_udp@dns-test-service.dns-644.svc.cluster.local wheezy_tcp@dns-test-service.dns-644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_udp@dns-test-service.dns-644.svc.cluster.local jessie_tcp@dns-test-service.dns-644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local]

Feb 20 00:53:10.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:10.392: INFO: Unable to read wheezy_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:10.400: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:10.406: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:10.649: INFO: Unable to read jessie_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:10.654: INFO: Unable to read jessie_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:10.659: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:10.663: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:10.705: INFO: Lookups using dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65 failed for: [wheezy_udp@dns-test-service.dns-644.svc.cluster.local wheezy_tcp@dns-test-service.dns-644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_udp@dns-test-service.dns-644.svc.cluster.local jessie_tcp@dns-test-service.dns-644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local]

Feb 20 00:53:15.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:15.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:15.384: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:15.389: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:15.419: INFO: Unable to read jessie_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:15.422: INFO: Unable to read jessie_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:15.426: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:15.430: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:15.448: INFO: Lookups using dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65 failed for: [wheezy_udp@dns-test-service.dns-644.svc.cluster.local wheezy_tcp@dns-test-service.dns-644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_udp@dns-test-service.dns-644.svc.cluster.local jessie_tcp@dns-test-service.dns-644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local]

Feb 20 00:53:20.378: INFO: Unable to read wheezy_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:20.400: INFO: Unable to read wheezy_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:20.406: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:20.411: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:20.458: INFO: Unable to read jessie_udp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:20.466: INFO: Unable to read jessie_tcp@dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:20.480: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:20.488: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local from pod dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65: the server could not find the requested resource (get pods dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65)
Feb 20 00:53:20.543: INFO: Lookups using dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65 failed for: [wheezy_udp@dns-test-service.dns-644.svc.cluster.local wheezy_tcp@dns-test-service.dns-644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_udp@dns-test-service.dns-644.svc.cluster.local jessie_tcp@dns-test-service.dns-644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-644.svc.cluster.local]

Feb 20 00:53:25.457: INFO: DNS probes using dns-644/dns-test-e4b84604-6418-418f-ab13-bf4f0428cb65 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:53:25.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-644" for this suite.

• [SLOW TEST:42.919 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":280,"completed":184,"skipped":3067,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:53:25.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-53c98c9b-4bee-44e9-a2f8-5018c30180f5
STEP: Creating a pod to test consume configMaps
Feb 20 00:53:26.159: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d" in namespace "projected-1772" to be "success or failure"
Feb 20 00:53:26.165: INFO: Pod "pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.378719ms
Feb 20 00:53:28.183: INFO: Pod "pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023804193s
Feb 20 00:53:30.200: INFO: Pod "pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04100973s
Feb 20 00:53:32.210: INFO: Pod "pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051023874s
Feb 20 00:53:34.222: INFO: Pod "pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062736173s
Feb 20 00:53:36.229: INFO: Pod "pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069696992s
STEP: Saw pod success
Feb 20 00:53:36.229: INFO: Pod "pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d" satisfied condition "success or failure"
Feb 20 00:53:36.235: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d container projected-configmap-volume-test: 
STEP: delete the pod
Feb 20 00:53:36.379: INFO: Waiting for pod pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d to disappear
Feb 20 00:53:36.388: INFO: Pod pod-projected-configmaps-c99f2ee5-2021-4e26-9758-f4b9c278be4d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:53:36.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1772" for this suite.

• [SLOW TEST:10.554 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":185,"skipped":3090,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:53:36.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 20 00:53:36.557: INFO: Waiting up to 5m0s for pod "pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885" in namespace "emptydir-4831" to be "success or failure"
Feb 20 00:53:36.565: INFO: Pod "pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885": Phase="Pending", Reason="", readiness=false. Elapsed: 7.196626ms
Feb 20 00:53:38.588: INFO: Pod "pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030452642s
Feb 20 00:53:40.697: INFO: Pod "pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139743978s
Feb 20 00:53:42.702: INFO: Pod "pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144335558s
Feb 20 00:53:44.711: INFO: Pod "pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153388897s
STEP: Saw pod success
Feb 20 00:53:44.712: INFO: Pod "pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885" satisfied condition "success or failure"
Feb 20 00:53:44.717: INFO: Trying to get logs from node jerma-node pod pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885 container test-container: 
STEP: delete the pod
Feb 20 00:53:44.760: INFO: Waiting for pod pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885 to disappear
Feb 20 00:53:44.813: INFO: Pod pod-aea911d0-1038-477f-b6e9-3ed2ca6b1885 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:53:44.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4831" for this suite.

• [SLOW TEST:8.417 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":186,"skipped":3093,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:53:44.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:54:01.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1666" for this suite.

• [SLOW TEST:16.547 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":280,"completed":187,"skipped":3144,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:54:01.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:54:02.465: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 00:54:04.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:54:06.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:54:08.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:54:11.572: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:54:11.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2320" for this suite.
STEP: Destroying namespace "webhook-2320-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.528 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":188,"skipped":3145,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:54:11.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5389
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5389
STEP: creating replication controller externalsvc in namespace services-5389
I0220 00:54:12.311240       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5389, replica count: 2
I0220 00:54:15.363368       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:54:18.364013       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:54:21.364899       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:54:24.365638       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb 20 00:54:24.446: INFO: Creating new exec pod
Feb 20 00:54:32.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5389 execpod6fvtj -- /bin/sh -x -c nslookup nodeport-service'
Feb 20 00:54:32.934: INFO: stderr: "I0220 00:54:32.749387    2798 log.go:172] (0xc000558630) (0xc000906280) Create stream\nI0220 00:54:32.749496    2798 log.go:172] (0xc000558630) (0xc000906280) Stream added, broadcasting: 1\nI0220 00:54:32.753073    2798 log.go:172] (0xc000558630) Reply frame received for 1\nI0220 00:54:32.753109    2798 log.go:172] (0xc000558630) (0xc000628820) Create stream\nI0220 00:54:32.753118    2798 log.go:172] (0xc000558630) (0xc000628820) Stream added, broadcasting: 3\nI0220 00:54:32.754755    2798 log.go:172] (0xc000558630) Reply frame received for 3\nI0220 00:54:32.754781    2798 log.go:172] (0xc000558630) (0xc0004e34a0) Create stream\nI0220 00:54:32.754792    2798 log.go:172] (0xc000558630) (0xc0004e34a0) Stream added, broadcasting: 5\nI0220 00:54:32.755846    2798 log.go:172] (0xc000558630) Reply frame received for 5\nI0220 00:54:32.831765    2798 log.go:172] (0xc000558630) Data frame received for 5\nI0220 00:54:32.831850    2798 log.go:172] (0xc0004e34a0) (5) Data frame handling\nI0220 00:54:32.831883    2798 log.go:172] (0xc0004e34a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0220 00:54:32.843140    2798 log.go:172] (0xc000558630) Data frame received for 3\nI0220 00:54:32.843278    2798 log.go:172] (0xc000628820) (3) Data frame handling\nI0220 00:54:32.843361    2798 log.go:172] (0xc000628820) (3) Data frame sent\nI0220 00:54:32.844034    2798 log.go:172] (0xc000558630) Data frame received for 3\nI0220 00:54:32.844061    2798 log.go:172] (0xc000628820) (3) Data frame handling\nI0220 00:54:32.844073    2798 log.go:172] (0xc000628820) (3) Data frame sent\nI0220 00:54:32.925808    2798 log.go:172] (0xc000558630) Data frame received for 1\nI0220 00:54:32.925877    2798 log.go:172] (0xc000558630) (0xc0004e34a0) Stream removed, broadcasting: 5\nI0220 00:54:32.925927    2798 log.go:172] (0xc000906280) (1) Data frame handling\nI0220 00:54:32.925946    2798 log.go:172] (0xc000906280) (1) Data frame sent\nI0220 00:54:32.925974    2798 log.go:172] (0xc000558630) (0xc000628820) Stream removed, broadcasting: 3\nI0220 00:54:32.926003    2798 log.go:172] (0xc000558630) (0xc000906280) Stream removed, broadcasting: 1\nI0220 00:54:32.926019    2798 log.go:172] (0xc000558630) Go away received\nI0220 00:54:32.926455    2798 log.go:172] (0xc000558630) (0xc000906280) Stream removed, broadcasting: 1\nI0220 00:54:32.926465    2798 log.go:172] (0xc000558630) (0xc000628820) Stream removed, broadcasting: 3\nI0220 00:54:32.926469    2798 log.go:172] (0xc000558630) (0xc0004e34a0) Stream removed, broadcasting: 5\n"
Feb 20 00:54:32.935: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5389.svc.cluster.local\tcanonical name = externalsvc.services-5389.svc.cluster.local.\nName:\texternalsvc.services-5389.svc.cluster.local\nAddress: 10.96.65.166\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5389, will wait for the garbage collector to delete the pods
Feb 20 00:54:33.011: INFO: Deleting ReplicationController externalsvc took: 7.646135ms
Feb 20 00:54:33.312: INFO: Terminating ReplicationController externalsvc pods took: 300.421607ms
Feb 20 00:54:41.558: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:54:41.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5389" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:29.688 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":189,"skipped":3170,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:54:41.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting the proxy server
Feb 20 00:54:41.700: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:54:41.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8717" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":280,"completed":190,"skipped":3184,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:54:41.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 20 00:54:41.860: INFO: >>> kubeConfig: /root/.kube/config
Feb 20 00:54:44.783: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:54:56.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4186" for this suite.

• [SLOW TEST:14.384 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":191,"skipped":3193,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:54:56.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 00:55:08.422: INFO: DNS probes using dns-6389/dns-test-bb714468-2089-4c78-800d-1c5c408998cf succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:55:08.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6389" for this suite.

• [SLOW TEST:12.365 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":280,"completed":192,"skipped":3216,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:55:08.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 00:55:08.779: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba" in namespace "downward-api-2421" to be "success or failure"
Feb 20 00:55:08.785: INFO: Pod "downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 5.293947ms
Feb 20 00:55:10.790: INFO: Pod "downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009852581s
Feb 20 00:55:12.797: INFO: Pod "downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016784022s
Feb 20 00:55:14.801: INFO: Pod "downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021230383s
Feb 20 00:55:16.808: INFO: Pod "downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028693455s
Feb 20 00:55:18.860: INFO: Pod "downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080464619s
STEP: Saw pod success
Feb 20 00:55:18.860: INFO: Pod "downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba" satisfied condition "success or failure"
Feb 20 00:55:18.866: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba container client-container: 
STEP: delete the pod
Feb 20 00:55:18.918: INFO: Waiting for pod downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba to disappear
Feb 20 00:55:18.952: INFO: Pod downwardapi-volume-8e397c04-af3b-4850-80cb-45f5cae9f0ba no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:55:18.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2421" for this suite.

• [SLOW TEST:10.455 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":193,"skipped":3216,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:55:19.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:55:20.094: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Feb 20 00:55:22.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:55:24.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:55:26.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717756920, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:55:29.187: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 00:55:29.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:55:30.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6302" for this suite.
STEP: Destroying namespace "webhook-6302-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.807 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":194,"skipped":3235,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:55:30.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7322.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7322.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7322.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7322.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7322.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7322.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 20 00:55:45.036: INFO: DNS probes using dns-7322/dns-test-9f82caf6-bf2e-435e-ba47-b1f9cd3679c5 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:55:45.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7322" for this suite.

• [SLOW TEST:14.603 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":195,"skipped":3237,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:55:45.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-84c7c0a0-73b8-4d33-94b8-50eb2514793c
STEP: Creating a pod to test consume configMaps
Feb 20 00:55:46.718: INFO: Waiting up to 5m0s for pod "pod-configmaps-33d31e70-9347-404b-860d-697b532086d5" in namespace "configmap-8848" to be "success or failure"
Feb 20 00:55:46.731: INFO: Pod "pod-configmaps-33d31e70-9347-404b-860d-697b532086d5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.813924ms
Feb 20 00:55:48.739: INFO: Pod "pod-configmaps-33d31e70-9347-404b-860d-697b532086d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020074524s
Feb 20 00:55:50.754: INFO: Pod "pod-configmaps-33d31e70-9347-404b-860d-697b532086d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035445438s
Feb 20 00:55:52.764: INFO: Pod "pod-configmaps-33d31e70-9347-404b-860d-697b532086d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045286994s
Feb 20 00:55:54.771: INFO: Pod "pod-configmaps-33d31e70-9347-404b-860d-697b532086d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052541936s
Feb 20 00:55:56.784: INFO: Pod "pod-configmaps-33d31e70-9347-404b-860d-697b532086d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065180722s
STEP: Saw pod success
Feb 20 00:55:56.784: INFO: Pod "pod-configmaps-33d31e70-9347-404b-860d-697b532086d5" satisfied condition "success or failure"
Feb 20 00:55:56.789: INFO: Trying to get logs from node jerma-node pod pod-configmaps-33d31e70-9347-404b-860d-697b532086d5 container configmap-volume-test: 
STEP: delete the pod
Feb 20 00:55:56.840: INFO: Waiting for pod pod-configmaps-33d31e70-9347-404b-860d-697b532086d5 to disappear
Feb 20 00:55:56.871: INFO: Pod pod-configmaps-33d31e70-9347-404b-860d-697b532086d5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:55:56.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8848" for this suite.

• [SLOW TEST:11.605 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":196,"skipped":3266,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:55:57.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-542
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-542
Feb 20 00:55:57.157: INFO: Found 0 stateful pods, waiting for 1
Feb 20 00:56:07.166: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 20 00:56:07.193: INFO: Deleting all statefulset in ns statefulset-542
Feb 20 00:56:07.200: INFO: Scaling statefulset ss to 0
Feb 20 00:56:27.379: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 00:56:27.385: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:56:27.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-542" for this suite.

• [SLOW TEST:30.423 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":197,"skipped":3286,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:56:27.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:56:40.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5994" for this suite.

• [SLOW TEST:13.302 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":280,"completed":198,"skipped":3309,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:56:40.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service nodeport-test with type=NodePort in namespace services-869
STEP: creating replication controller nodeport-test in namespace services-869
I0220 00:56:40.970291       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-869, replica count: 2
I0220 00:56:44.021771       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:56:47.022338       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:56:50.022812       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 00:56:53.023344       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 20 00:56:53.023: INFO: Creating new exec pod
Feb 20 00:57:02.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-869 execpod6tmdp -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Feb 20 00:57:02.502: INFO: stderr: "I0220 00:57:02.304875    2829 log.go:172] (0xc00096e000) (0xc0007d6000) Create stream\nI0220 00:57:02.305222    2829 log.go:172] (0xc00096e000) (0xc0007d6000) Stream added, broadcasting: 1\nI0220 00:57:02.311901    2829 log.go:172] (0xc00096e000) Reply frame received for 1\nI0220 00:57:02.311987    2829 log.go:172] (0xc00096e000) (0xc0007d60a0) Create stream\nI0220 00:57:02.312002    2829 log.go:172] (0xc00096e000) (0xc0007d60a0) Stream added, broadcasting: 3\nI0220 00:57:02.316300    2829 log.go:172] (0xc00096e000) Reply frame received for 3\nI0220 00:57:02.316478    2829 log.go:172] (0xc00096e000) (0xc00070c460) Create stream\nI0220 00:57:02.316519    2829 log.go:172] (0xc00096e000) (0xc00070c460) Stream added, broadcasting: 5\nI0220 00:57:02.321287    2829 log.go:172] (0xc00096e000) Reply frame received for 5\nI0220 00:57:02.384058    2829 log.go:172] (0xc00096e000) Data frame received for 5\nI0220 00:57:02.384161    2829 log.go:172] (0xc00070c460) (5) Data frame handling\nI0220 00:57:02.384204    2829 log.go:172] (0xc00070c460) (5) Data frame sent\nI0220 00:57:02.384217    2829 log.go:172] (0xc00096e000) Data frame received for 5\nI0220 00:57:02.384224    2829 log.go:172] (0xc00070c460) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI0220 00:57:02.384298    2829 log.go:172] (0xc00070c460) (5) Data frame sent\nI0220 00:57:02.390722    2829 log.go:172] (0xc00096e000) Data frame received for 5\nI0220 00:57:02.390773    2829 log.go:172] (0xc00070c460) (5) Data frame handling\nI0220 00:57:02.390796    2829 log.go:172] (0xc00070c460) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0220 00:57:02.485232    2829 log.go:172] (0xc00096e000) (0xc0007d60a0) Stream removed, broadcasting: 3\nI0220 00:57:02.485427    2829 log.go:172] (0xc00096e000) Data frame received for 1\nI0220 00:57:02.485449    2829 log.go:172] (0xc0007d6000) (1) Data frame handling\nI0220 00:57:02.485470    2829 log.go:172] (0xc0007d6000) (1) Data frame sent\nI0220 00:57:02.485488    2829 log.go:172] (0xc00096e000) (0xc0007d6000) Stream removed, broadcasting: 1\nI0220 00:57:02.486733    2829 log.go:172] (0xc00096e000) (0xc00070c460) Stream removed, broadcasting: 5\nI0220 00:57:02.486845    2829 log.go:172] (0xc00096e000) (0xc0007d6000) Stream removed, broadcasting: 1\nI0220 00:57:02.486865    2829 log.go:172] (0xc00096e000) (0xc0007d60a0) Stream removed, broadcasting: 3\nI0220 00:57:02.486874    2829 log.go:172] (0xc00096e000) (0xc00070c460) Stream removed, broadcasting: 5\nI0220 00:57:02.486888    2829 log.go:172] (0xc00096e000) Go away received\n"
Feb 20 00:57:02.502: INFO: stdout: ""
Feb 20 00:57:02.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-869 execpod6tmdp -- /bin/sh -x -c nc -zv -t -w 2 10.96.39.79 80'
Feb 20 00:57:02.826: INFO: stderr: "I0220 00:57:02.676548    2848 log.go:172] (0xc00093a370) (0xc00091a460) Create stream\nI0220 00:57:02.676639    2848 log.go:172] (0xc00093a370) (0xc00091a460) Stream added, broadcasting: 1\nI0220 00:57:02.683344    2848 log.go:172] (0xc00093a370) Reply frame received for 1\nI0220 00:57:02.683395    2848 log.go:172] (0xc00093a370) (0xc00091a000) Create stream\nI0220 00:57:02.683406    2848 log.go:172] (0xc00093a370) (0xc00091a000) Stream added, broadcasting: 3\nI0220 00:57:02.684732    2848 log.go:172] (0xc00093a370) Reply frame received for 3\nI0220 00:57:02.684836    2848 log.go:172] (0xc00093a370) (0xc0009b6000) Create stream\nI0220 00:57:02.684856    2848 log.go:172] (0xc00093a370) (0xc0009b6000) Stream added, broadcasting: 5\nI0220 00:57:02.686440    2848 log.go:172] (0xc00093a370) Reply frame received for 5\nI0220 00:57:02.748637    2848 log.go:172] (0xc00093a370) Data frame received for 5\nI0220 00:57:02.748713    2848 log.go:172] (0xc0009b6000) (5) Data frame handling\nI0220 00:57:02.748764    2848 log.go:172] (0xc0009b6000) (5) Data frame sent\nI0220 00:57:02.748793    2848 log.go:172] (0xc00093a370) Data frame received for 5\nI0220 00:57:02.748815    2848 log.go:172] (0xc0009b6000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.39.79 80\nI0220 00:57:02.748912    2848 log.go:172] (0xc0009b6000) (5) Data frame sent\nI0220 00:57:02.750235    2848 log.go:172] (0xc00093a370) Data frame received for 5\nI0220 00:57:02.750250    2848 log.go:172] (0xc0009b6000) (5) Data frame handling\nI0220 00:57:02.750265    2848 log.go:172] (0xc0009b6000) (5) Data frame sent\nConnection to 10.96.39.79 80 port [tcp/http] succeeded!\nI0220 00:57:02.811702    2848 log.go:172] (0xc00093a370) Data frame received for 1\nI0220 00:57:02.811836    2848 log.go:172] (0xc00091a460) (1) Data frame handling\nI0220 00:57:02.811881    2848 log.go:172] (0xc00091a460) (1) Data frame sent\nI0220 00:57:02.811967    2848 log.go:172] (0xc00093a370) (0xc00091a460) Stream removed, broadcasting: 1\nI0220 00:57:02.812250    2848 log.go:172] (0xc00093a370) (0xc0009b6000) Stream removed, broadcasting: 5\nI0220 00:57:02.812288    2848 log.go:172] (0xc00093a370) (0xc00091a000) Stream removed, broadcasting: 3\nI0220 00:57:02.812318    2848 log.go:172] (0xc00093a370) Go away received\nI0220 00:57:02.812970    2848 log.go:172] (0xc00093a370) (0xc00091a460) Stream removed, broadcasting: 1\nI0220 00:57:02.813004    2848 log.go:172] (0xc00093a370) (0xc00091a000) Stream removed, broadcasting: 3\nI0220 00:57:02.813015    2848 log.go:172] (0xc00093a370) (0xc0009b6000) Stream removed, broadcasting: 5\n"
Feb 20 00:57:02.826: INFO: stdout: ""
Feb 20 00:57:02.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-869 execpod6tmdp -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32638'
Feb 20 00:57:03.237: INFO: stderr: "I0220 00:57:03.051393    2868 log.go:172] (0xc000bea0b0) (0xc0009fc140) Create stream\nI0220 00:57:03.051605    2868 log.go:172] (0xc000bea0b0) (0xc0009fc140) Stream added, broadcasting: 1\nI0220 00:57:03.057379    2868 log.go:172] (0xc000bea0b0) Reply frame received for 1\nI0220 00:57:03.057542    2868 log.go:172] (0xc000bea0b0) (0xc00097e140) Create stream\nI0220 00:57:03.057617    2868 log.go:172] (0xc000bea0b0) (0xc00097e140) Stream added, broadcasting: 3\nI0220 00:57:03.062408    2868 log.go:172] (0xc000bea0b0) Reply frame received for 3\nI0220 00:57:03.062599    2868 log.go:172] (0xc000bea0b0) (0xc0009fc1e0) Create stream\nI0220 00:57:03.062640    2868 log.go:172] (0xc000bea0b0) (0xc0009fc1e0) Stream added, broadcasting: 5\nI0220 00:57:03.065085    2868 log.go:172] (0xc000bea0b0) Reply frame received for 5\nI0220 00:57:03.163790    2868 log.go:172] (0xc000bea0b0) Data frame received for 5\nI0220 00:57:03.163882    2868 log.go:172] (0xc0009fc1e0) (5) Data frame handling\nI0220 00:57:03.163910    2868 log.go:172] (0xc0009fc1e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32638\nI0220 00:57:03.168548    2868 log.go:172] (0xc000bea0b0) Data frame received for 5\nI0220 00:57:03.168573    2868 log.go:172] (0xc0009fc1e0) (5) Data frame handling\nI0220 00:57:03.168587    2868 log.go:172] (0xc0009fc1e0) (5) Data frame sent\nConnection to 10.96.2.250 32638 port [tcp/32638] succeeded!\nI0220 00:57:03.224775    2868 log.go:172] (0xc000bea0b0) Data frame received for 1\nI0220 00:57:03.224988    2868 log.go:172] (0xc000bea0b0) (0xc0009fc1e0) Stream removed, broadcasting: 5\nI0220 00:57:03.225043    2868 log.go:172] (0xc0009fc140) (1) Data frame handling\nI0220 00:57:03.225087    2868 log.go:172] (0xc0009fc140) (1) Data frame sent\nI0220 00:57:03.225173    2868 log.go:172] (0xc000bea0b0) (0xc00097e140) Stream removed, broadcasting: 3\nI0220 00:57:03.225229    2868 log.go:172] (0xc000bea0b0) (0xc0009fc140) Stream removed, broadcasting: 1\nI0220 00:57:03.225250    2868 log.go:172] (0xc000bea0b0) Go away received\nI0220 00:57:03.226074    2868 log.go:172] (0xc000bea0b0) (0xc0009fc140) Stream removed, broadcasting: 1\nI0220 00:57:03.226084    2868 log.go:172] (0xc000bea0b0) (0xc00097e140) Stream removed, broadcasting: 3\nI0220 00:57:03.226090    2868 log.go:172] (0xc000bea0b0) (0xc0009fc1e0) Stream removed, broadcasting: 5\n"
Feb 20 00:57:03.237: INFO: stdout: ""
Feb 20 00:57:03.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-869 execpod6tmdp -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32638'
Feb 20 00:57:03.519: INFO: stderr: "I0220 00:57:03.376559    2888 log.go:172] (0xc000a160b0) (0xc0003a74a0) Create stream\nI0220 00:57:03.376777    2888 log.go:172] (0xc000a160b0) (0xc0003a74a0) Stream added, broadcasting: 1\nI0220 00:57:03.379839    2888 log.go:172] (0xc000a160b0) Reply frame received for 1\nI0220 00:57:03.379866    2888 log.go:172] (0xc000a160b0) (0xc000b4c000) Create stream\nI0220 00:57:03.379874    2888 log.go:172] (0xc000a160b0) (0xc000b4c000) Stream added, broadcasting: 3\nI0220 00:57:03.380738    2888 log.go:172] (0xc000a160b0) Reply frame received for 3\nI0220 00:57:03.380762    2888 log.go:172] (0xc000a160b0) (0xc0006d9b80) Create stream\nI0220 00:57:03.380770    2888 log.go:172] (0xc000a160b0) (0xc0006d9b80) Stream added, broadcasting: 5\nI0220 00:57:03.381949    2888 log.go:172] (0xc000a160b0) Reply frame received for 5\nI0220 00:57:03.448965    2888 log.go:172] (0xc000a160b0) Data frame received for 5\nI0220 00:57:03.449120    2888 log.go:172] (0xc0006d9b80) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 32638\nI0220 00:57:03.449160    2888 log.go:172] (0xc0006d9b80) (5) Data frame sent\nI0220 00:57:03.452386    2888 log.go:172] (0xc000a160b0) Data frame received for 5\nI0220 00:57:03.452470    2888 log.go:172] (0xc0006d9b80) (5) Data frame handling\nI0220 00:57:03.452523    2888 log.go:172] (0xc0006d9b80) (5) Data frame sent\nConnection to 10.96.1.234 32638 port [tcp/32638] succeeded!\nI0220 00:57:03.510776    2888 log.go:172] (0xc000a160b0) Data frame received for 1\nI0220 00:57:03.510873    2888 log.go:172] (0xc000a160b0) (0xc0006d9b80) Stream removed, broadcasting: 5\nI0220 00:57:03.510922    2888 log.go:172] (0xc0003a74a0) (1) Data frame handling\nI0220 00:57:03.510942    2888 log.go:172] (0xc0003a74a0) (1) Data frame sent\nI0220 00:57:03.510968    2888 log.go:172] (0xc000a160b0) (0xc000b4c000) Stream removed, broadcasting: 3\nI0220 00:57:03.510993    2888 log.go:172] (0xc000a160b0) (0xc0003a74a0) Stream removed, broadcasting: 1\nI0220 00:57:03.511004    2888 log.go:172] (0xc000a160b0) Go away received\nI0220 00:57:03.511731    2888 log.go:172] (0xc000a160b0) (0xc0003a74a0) Stream removed, broadcasting: 1\nI0220 00:57:03.511750    2888 log.go:172] (0xc000a160b0) (0xc000b4c000) Stream removed, broadcasting: 3\nI0220 00:57:03.511764    2888 log.go:172] (0xc000a160b0) (0xc0006d9b80) Stream removed, broadcasting: 5\n"
Feb 20 00:57:03.519: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:57:03.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-869" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:22.790 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":199,"skipped":3315,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:57:03.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 20 00:57:23.744: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 00:57:23.764: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 00:57:25.764: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 00:57:25.772: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 00:57:27.764: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 00:57:27.770: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 00:57:29.764: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 00:57:29.771: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 00:57:31.764: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 00:57:31.771: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 00:57:33.765: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 00:57:33.773: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:57:33.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4005" for this suite.

• [SLOW TEST:30.307 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":200,"skipped":3315,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:57:33.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 00:57:34.605: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 00:57:36.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:57:38.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:57:40.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 00:57:42.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757054, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 00:57:45.684: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
Feb 20 00:57:53.067: INFO: Waiting for webhook configuration to be ready...
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:57:58.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4262" for this suite.
STEP: Destroying namespace "webhook-4262-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:24.556 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":201,"skipped":3325,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:57:58.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:57:58.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2573" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":202,"skipped":3355,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:57:58.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 20 00:57:58.673: INFO: PodSpec: initContainers in spec.initContainers
Feb 20 00:58:57.671: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b7971dc7-3c1f-4384-9396-616889b8ff91", GenerateName:"", Namespace:"init-container-9815", SelfLink:"/api/v1/namespaces/init-container-9815/pods/pod-init-b7971dc7-3c1f-4384-9396-616889b8ff91", UID:"1c09d853-3eff-49ee-beb3-acb083dd0ff0", ResourceVersion:"9510918", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717757078, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"672959547"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rfg9z", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0064b8d80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rfg9z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rfg9z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rfg9z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005766058), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0026e6180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0057660e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005766100)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005766108), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00576610c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757078, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757078, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757078, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757078, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0034aade0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000309dc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000309e30)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://e3cf73582d4401442b59fb70c6083bdcacb5658d3b1699944c0538619862eae4", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034aae20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034aae00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00576618f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:58:57.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9815" for this suite.

• [SLOW TEST:59.174 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":203,"skipped":3376,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:58:57.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-7317/configmap-test-96f5be0d-e475-46c7-9565-b08e99d34423
STEP: Creating a pod to test consume configMaps
Feb 20 00:58:57.930: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb" in namespace "configmap-7317" to be "success or failure"
Feb 20 00:58:57.961: INFO: Pod "pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.198392ms
Feb 20 00:58:59.968: INFO: Pod "pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038061898s
Feb 20 00:59:01.977: INFO: Pod "pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047364921s
Feb 20 00:59:03.986: INFO: Pod "pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05631293s
Feb 20 00:59:05.993: INFO: Pod "pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06277187s
STEP: Saw pod success
Feb 20 00:59:05.993: INFO: Pod "pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb" satisfied condition "success or failure"
Feb 20 00:59:05.997: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb container env-test: 
STEP: delete the pod
Feb 20 00:59:06.401: INFO: Waiting for pod pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb to disappear
Feb 20 00:59:06.414: INFO: Pod pod-configmaps-a2b125b1-e1a1-4016-be96-7aef2453d3eb no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:59:06.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7317" for this suite.

• [SLOW TEST:8.666 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":204,"skipped":3388,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:59:06.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 00:59:06.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f" in namespace "downward-api-9330" to be "success or failure"
Feb 20 00:59:06.730: INFO: Pod "downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f": Phase="Pending", Reason="", readiness=false. Elapsed: 68.820251ms
Feb 20 00:59:08.739: INFO: Pod "downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077227678s
Feb 20 00:59:10.750: INFO: Pod "downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088856611s
Feb 20 00:59:12.757: INFO: Pod "downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095478318s
Feb 20 00:59:14.768: INFO: Pod "downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106537873s
STEP: Saw pod success
Feb 20 00:59:14.769: INFO: Pod "downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f" satisfied condition "success or failure"
Feb 20 00:59:14.775: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f container client-container: 
STEP: delete the pod
Feb 20 00:59:14.814: INFO: Waiting for pod downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f to disappear
Feb 20 00:59:14.857: INFO: Pod downwardapi-volume-e74d4cc8-91d0-4139-8cbc-d0066ea2846f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 00:59:14.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9330" for this suite.

• [SLOW TEST:8.439 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":205,"skipped":3406,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 00:59:14.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-f6a7e2a4-edc4-4b27-99df-93ce203b181d in namespace container-probe-3004
Feb 20 00:59:23.234: INFO: Started pod liveness-f6a7e2a4-edc4-4b27-99df-93ce203b181d in namespace container-probe-3004
STEP: checking the pod's current state and verifying that restartCount is present
Feb 20 00:59:23.241: INFO: Initial restart count of pod liveness-f6a7e2a4-edc4-4b27-99df-93ce203b181d is 0
Feb 20 00:59:35.305: INFO: Restart count of pod container-probe-3004/liveness-f6a7e2a4-edc4-4b27-99df-93ce203b181d is now 1 (12.064648417s elapsed)
Feb 20 00:59:55.418: INFO: Restart count of pod container-probe-3004/liveness-f6a7e2a4-edc4-4b27-99df-93ce203b181d is now 2 (32.176953791s elapsed)
Feb 20 01:00:15.542: INFO: Restart count of pod container-probe-3004/liveness-f6a7e2a4-edc4-4b27-99df-93ce203b181d is now 3 (52.301265332s elapsed)
Feb 20 01:00:35.663: INFO: Restart count of pod container-probe-3004/liveness-f6a7e2a4-edc4-4b27-99df-93ce203b181d is now 4 (1m12.422245578s elapsed)
Feb 20 01:01:35.989: INFO: Restart count of pod container-probe-3004/liveness-f6a7e2a4-edc4-4b27-99df-93ce203b181d is now 5 (2m12.748074252s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:01:36.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3004" for this suite.

• [SLOW TEST:141.168 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":206,"skipped":3432,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:01:36.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0220 01:01:46.297405       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 20 01:01:46.297: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:01:46.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1918" for this suite.

• [SLOW TEST:10.272 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":207,"skipped":3451,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:01:46.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 20 01:01:48.981: INFO: Waiting up to 5m0s for pod "pod-4a46b56e-fb10-4bb8-94a3-85f521689950" in namespace "emptydir-1932" to be "success or failure"
Feb 20 01:01:49.024: INFO: Pod "pod-4a46b56e-fb10-4bb8-94a3-85f521689950": Phase="Pending", Reason="", readiness=false. Elapsed: 42.601654ms
Feb 20 01:01:51.034: INFO: Pod "pod-4a46b56e-fb10-4bb8-94a3-85f521689950": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052552085s
Feb 20 01:01:53.043: INFO: Pod "pod-4a46b56e-fb10-4bb8-94a3-85f521689950": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061674024s
Feb 20 01:01:55.058: INFO: Pod "pod-4a46b56e-fb10-4bb8-94a3-85f521689950": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07681255s
Feb 20 01:01:57.070: INFO: Pod "pod-4a46b56e-fb10-4bb8-94a3-85f521689950": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088731065s
STEP: Saw pod success
Feb 20 01:01:57.070: INFO: Pod "pod-4a46b56e-fb10-4bb8-94a3-85f521689950" satisfied condition "success or failure"
Feb 20 01:01:57.073: INFO: Trying to get logs from node jerma-node pod pod-4a46b56e-fb10-4bb8-94a3-85f521689950 container test-container: 
STEP: delete the pod
Feb 20 01:01:57.112: INFO: Waiting for pod pod-4a46b56e-fb10-4bb8-94a3-85f521689950 to disappear
Feb 20 01:01:57.115: INFO: Pod pod-4a46b56e-fb10-4bb8-94a3-85f521689950 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:01:57.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1932" for this suite.

• [SLOW TEST:10.895 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":208,"skipped":3472,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:01:57.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:02:05.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9794" for this suite.

• [SLOW TEST:8.580 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":209,"skipped":3492,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:02:05.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-5d4d18ec-b7a3-4c07-8443-10b977f9a090
STEP: Creating a pod to test consume secrets
Feb 20 01:02:06.013: INFO: Waiting up to 5m0s for pod "pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be" in namespace "secrets-4127" to be "success or failure"
Feb 20 01:02:06.027: INFO: Pod "pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be": Phase="Pending", Reason="", readiness=false. Elapsed: 13.796152ms
Feb 20 01:02:08.046: INFO: Pod "pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032609668s
Feb 20 01:02:10.053: INFO: Pod "pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039537581s
Feb 20 01:02:12.097: INFO: Pod "pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084096856s
Feb 20 01:02:14.106: INFO: Pod "pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092806716s
Feb 20 01:02:16.111: INFO: Pod "pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097884865s
STEP: Saw pod success
Feb 20 01:02:16.111: INFO: Pod "pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be" satisfied condition "success or failure"
Feb 20 01:02:16.114: INFO: Trying to get logs from node jerma-node pod pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be container secret-env-test: 
STEP: delete the pod
Feb 20 01:02:16.458: INFO: Waiting for pod pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be to disappear
Feb 20 01:02:16.467: INFO: Pod pod-secrets-bad4333f-461e-4e28-979e-c309b4c904be no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:02:16.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4127" for this suite.

• [SLOW TEST:10.686 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":210,"skipped":3507,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:02:16.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:02:16.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6738" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":211,"skipped":3512,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:02:16.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 20 01:02:16.937: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-a 97548cbf-1298-475f-9049-d380128e78b6 9511595 0 2020-02-20 01:02:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:02:16.938: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-a 97548cbf-1298-475f-9049-d380128e78b6 9511595 0 2020-02-20 01:02:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 20 01:02:26.951: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-a 97548cbf-1298-475f-9049-d380128e78b6 9511634 0 2020-02-20 01:02:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:02:26.952: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-a 97548cbf-1298-475f-9049-d380128e78b6 9511634 0 2020-02-20 01:02:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 20 01:02:36.963: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-a 97548cbf-1298-475f-9049-d380128e78b6 9511658 0 2020-02-20 01:02:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:02:36.964: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-a 97548cbf-1298-475f-9049-d380128e78b6 9511658 0 2020-02-20 01:02:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 20 01:02:46.982: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-a 97548cbf-1298-475f-9049-d380128e78b6 9511682 0 2020-02-20 01:02:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:02:46.982: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-a 97548cbf-1298-475f-9049-d380128e78b6 9511682 0 2020-02-20 01:02:16 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 20 01:02:56.990: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-b 2c0267a7-c5bd-44c8-943d-c826277e85cb 9511706 0 2020-02-20 01:02:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:02:56.991: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-b 2c0267a7-c5bd-44c8-943d-c826277e85cb 9511706 0 2020-02-20 01:02:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 20 01:03:06.998: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-b 2c0267a7-c5bd-44c8-943d-c826277e85cb 9511730 0 2020-02-20 01:02:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:03:06.998: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5082 /api/v1/namespaces/watch-5082/configmaps/e2e-watch-test-configmap-b 2c0267a7-c5bd-44c8-943d-c826277e85cb 9511730 0 2020-02-20 01:02:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:03:16.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5082" for this suite.

• [SLOW TEST:60.280 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":212,"skipped":3533,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:03:17.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 20 01:03:33.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 01:03:33.285: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 01:03:35.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 01:03:35.293: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 01:03:37.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 01:03:37.293: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 01:03:39.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 01:03:39.292: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 01:03:41.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 01:03:41.298: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 20 01:03:43.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 20 01:03:43.292: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:03:43.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4062" for this suite.

• [SLOW TEST:26.291 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":213,"skipped":3549,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:03:43.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:03:43.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9712
I0220 01:03:43.453731       9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9712, replica count: 1
I0220 01:03:44.505089       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:03:45.505759       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:03:46.506418       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:03:47.507286       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:03:48.508560       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:03:49.509265       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:03:50.511672       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:03:51.513018       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 20 01:03:51.629: INFO: Created: latency-svc-t6xgz
Feb 20 01:03:51.637: INFO: Got endpoints: latency-svc-t6xgz [24.291704ms]
Feb 20 01:03:51.698: INFO: Created: latency-svc-xwwgg
Feb 20 01:03:51.776: INFO: Got endpoints: latency-svc-xwwgg [138.170317ms]
Feb 20 01:03:51.783: INFO: Created: latency-svc-8cbrs
Feb 20 01:03:51.820: INFO: Got endpoints: latency-svc-8cbrs [180.915278ms]
Feb 20 01:03:51.870: INFO: Created: latency-svc-mqb5b
Feb 20 01:03:51.938: INFO: Got endpoints: latency-svc-mqb5b [298.424673ms]
Feb 20 01:03:52.071: INFO: Created: latency-svc-4gt98
Feb 20 01:03:52.074: INFO: Got endpoints: latency-svc-4gt98 [435.459079ms]
Feb 20 01:03:52.109: INFO: Created: latency-svc-kzmjz
Feb 20 01:03:52.119: INFO: Got endpoints: latency-svc-kzmjz [479.586154ms]
Feb 20 01:03:52.241: INFO: Created: latency-svc-v6qht
Feb 20 01:03:52.273: INFO: Got endpoints: latency-svc-v6qht [633.16409ms]
Feb 20 01:03:52.277: INFO: Created: latency-svc-bvzwz
Feb 20 01:03:52.293: INFO: Got endpoints: latency-svc-bvzwz [653.479518ms]
Feb 20 01:03:52.405: INFO: Created: latency-svc-nj7jh
Feb 20 01:03:52.405: INFO: Got endpoints: latency-svc-nj7jh [765.834456ms]
Feb 20 01:03:52.440: INFO: Created: latency-svc-9tvkp
Feb 20 01:03:52.445: INFO: Got endpoints: latency-svc-9tvkp [805.669286ms]
Feb 20 01:03:52.553: INFO: Created: latency-svc-n9zpp
Feb 20 01:03:52.564: INFO: Got endpoints: latency-svc-n9zpp [924.317879ms]
Feb 20 01:03:52.594: INFO: Created: latency-svc-8vrmg
Feb 20 01:03:52.630: INFO: Got endpoints: latency-svc-8vrmg [991.061268ms]
Feb 20 01:03:52.635: INFO: Created: latency-svc-k5bw8
Feb 20 01:03:52.639: INFO: Got endpoints: latency-svc-k5bw8 [999.720823ms]
Feb 20 01:03:52.705: INFO: Created: latency-svc-hldbr
Feb 20 01:03:52.720: INFO: Got endpoints: latency-svc-hldbr [1.08071267s]
Feb 20 01:03:52.737: INFO: Created: latency-svc-q4th6
Feb 20 01:03:52.741: INFO: Got endpoints: latency-svc-q4th6 [1.101839114s]
Feb 20 01:03:52.778: INFO: Created: latency-svc-m8ztn
Feb 20 01:03:52.794: INFO: Got endpoints: latency-svc-m8ztn [1.154594626s]
Feb 20 01:03:52.795: INFO: Created: latency-svc-kjcjn
Feb 20 01:03:52.801: INFO: Got endpoints: latency-svc-kjcjn [1.024024426s]
Feb 20 01:03:52.844: INFO: Created: latency-svc-cbvjb
Feb 20 01:03:52.855: INFO: Got endpoints: latency-svc-cbvjb [1.034463105s]
Feb 20 01:03:52.887: INFO: Created: latency-svc-8s9p5
Feb 20 01:03:52.911: INFO: Got endpoints: latency-svc-8s9p5 [973.477561ms]
Feb 20 01:03:52.944: INFO: Created: latency-svc-xnlpf
Feb 20 01:03:53.057: INFO: Got endpoints: latency-svc-xnlpf [982.113583ms]
Feb 20 01:03:53.104: INFO: Created: latency-svc-6z4d4
Feb 20 01:03:53.109: INFO: Got endpoints: latency-svc-6z4d4 [989.722506ms]
Feb 20 01:03:53.204: INFO: Created: latency-svc-xjh4w
Feb 20 01:03:53.234: INFO: Got endpoints: latency-svc-xjh4w [960.826195ms]
Feb 20 01:03:53.241: INFO: Created: latency-svc-sjqxx
Feb 20 01:03:53.256: INFO: Got endpoints: latency-svc-sjqxx [962.639683ms]
Feb 20 01:03:53.266: INFO: Created: latency-svc-tvlrn
Feb 20 01:03:53.266: INFO: Got endpoints: latency-svc-tvlrn [860.659973ms]
Feb 20 01:03:53.290: INFO: Created: latency-svc-dhtkp
Feb 20 01:03:53.293: INFO: Got endpoints: latency-svc-dhtkp [847.274283ms]
Feb 20 01:03:53.377: INFO: Created: latency-svc-tdx9l
Feb 20 01:03:53.390: INFO: Got endpoints: latency-svc-tdx9l [826.073498ms]
Feb 20 01:03:53.413: INFO: Created: latency-svc-hxxx7
Feb 20 01:03:53.419: INFO: Got endpoints: latency-svc-hxxx7 [788.179758ms]
Feb 20 01:03:53.449: INFO: Created: latency-svc-lctcv
Feb 20 01:03:53.451: INFO: Got endpoints: latency-svc-lctcv [811.203954ms]
Feb 20 01:03:53.541: INFO: Created: latency-svc-lnxm7
Feb 20 01:03:53.553: INFO: Got endpoints: latency-svc-lnxm7 [832.615292ms]
Feb 20 01:03:53.569: INFO: Created: latency-svc-rwvsd
Feb 20 01:03:53.574: INFO: Got endpoints: latency-svc-rwvsd [832.124178ms]
Feb 20 01:03:53.614: INFO: Created: latency-svc-48hh8
Feb 20 01:03:53.635: INFO: Created: latency-svc-cg4qs
Feb 20 01:03:53.637: INFO: Got endpoints: latency-svc-48hh8 [843.177342ms]
Feb 20 01:03:53.674: INFO: Got endpoints: latency-svc-cg4qs [872.733825ms]
Feb 20 01:03:53.709: INFO: Created: latency-svc-gh8k8
Feb 20 01:03:53.732: INFO: Got endpoints: latency-svc-gh8k8 [876.8459ms]
Feb 20 01:03:53.736: INFO: Created: latency-svc-r78m9
Feb 20 01:03:53.751: INFO: Got endpoints: latency-svc-r78m9 [838.889163ms]
Feb 20 01:03:53.776: INFO: Created: latency-svc-7z756
Feb 20 01:03:53.854: INFO: Got endpoints: latency-svc-7z756 [797.112883ms]
Feb 20 01:03:53.866: INFO: Created: latency-svc-9tztl
Feb 20 01:03:54.245: INFO: Got endpoints: latency-svc-9tztl [1.135461335s]
Feb 20 01:03:54.257: INFO: Created: latency-svc-ltdkw
Feb 20 01:03:54.270: INFO: Got endpoints: latency-svc-ltdkw [1.036356893s]
Feb 20 01:03:54.294: INFO: Created: latency-svc-gzlkl
Feb 20 01:03:54.334: INFO: Got endpoints: latency-svc-gzlkl [1.077294039s]
Feb 20 01:03:54.436: INFO: Created: latency-svc-zkxmm
Feb 20 01:03:54.444: INFO: Got endpoints: latency-svc-zkxmm [1.178604873s]
Feb 20 01:03:54.479: INFO: Created: latency-svc-fwbr5
Feb 20 01:03:54.508: INFO: Got endpoints: latency-svc-fwbr5 [1.214958523s]
Feb 20 01:03:54.572: INFO: Created: latency-svc-wpml6
Feb 20 01:03:54.576: INFO: Got endpoints: latency-svc-wpml6 [1.185950785s]
Feb 20 01:03:54.604: INFO: Created: latency-svc-mqlwj
Feb 20 01:03:54.629: INFO: Got endpoints: latency-svc-mqlwj [1.2095835s]
Feb 20 01:03:54.633: INFO: Created: latency-svc-6k7nj
Feb 20 01:03:54.654: INFO: Got endpoints: latency-svc-6k7nj [1.203327599s]
Feb 20 01:03:54.738: INFO: Created: latency-svc-2t8qq
Feb 20 01:03:54.755: INFO: Got endpoints: latency-svc-2t8qq [1.201774171s]
Feb 20 01:03:54.758: INFO: Created: latency-svc-wx4rj
Feb 20 01:03:54.763: INFO: Got endpoints: latency-svc-wx4rj [1.188984886s]
Feb 20 01:03:54.791: INFO: Created: latency-svc-28gnp
Feb 20 01:03:54.809: INFO: Got endpoints: latency-svc-28gnp [1.171686762s]
Feb 20 01:03:54.828: INFO: Created: latency-svc-kzdlc
Feb 20 01:03:54.832: INFO: Got endpoints: latency-svc-kzdlc [1.157860909s]
Feb 20 01:03:54.894: INFO: Created: latency-svc-x6crq
Feb 20 01:03:54.919: INFO: Got endpoints: latency-svc-x6crq [110.357057ms]
Feb 20 01:03:54.921: INFO: Created: latency-svc-bfgxf
Feb 20 01:03:54.926: INFO: Got endpoints: latency-svc-bfgxf [1.193370199s]
Feb 20 01:03:54.973: INFO: Created: latency-svc-85mrb
Feb 20 01:03:54.981: INFO: Got endpoints: latency-svc-85mrb [1.230204661s]
Feb 20 01:03:55.059: INFO: Created: latency-svc-h4ccs
Feb 20 01:03:55.062: INFO: Got endpoints: latency-svc-h4ccs [1.208186918s]
Feb 20 01:03:55.100: INFO: Created: latency-svc-5x44x
Feb 20 01:03:55.109: INFO: Got endpoints: latency-svc-5x44x [863.99732ms]
Feb 20 01:03:55.241: INFO: Created: latency-svc-rlmpt
Feb 20 01:03:55.249: INFO: Got endpoints: latency-svc-rlmpt [978.868814ms]
Feb 20 01:03:55.277: INFO: Created: latency-svc-f79hf
Feb 20 01:03:55.280: INFO: Got endpoints: latency-svc-f79hf [945.475645ms]
Feb 20 01:03:55.325: INFO: Created: latency-svc-sjmrj
Feb 20 01:03:55.405: INFO: Got endpoints: latency-svc-sjmrj [960.010635ms]
Feb 20 01:03:55.409: INFO: Created: latency-svc-8ch6p
Feb 20 01:03:55.426: INFO: Got endpoints: latency-svc-8ch6p [917.644661ms]
Feb 20 01:03:55.452: INFO: Created: latency-svc-rwm84
Feb 20 01:03:55.456: INFO: Got endpoints: latency-svc-rwm84 [879.143027ms]
Feb 20 01:03:55.514: INFO: Created: latency-svc-ndt9k
Feb 20 01:03:55.542: INFO: Created: latency-svc-mcmv9
Feb 20 01:03:55.543: INFO: Got endpoints: latency-svc-ndt9k [914.389933ms]
Feb 20 01:03:55.558: INFO: Got endpoints: latency-svc-mcmv9 [903.757957ms]
Feb 20 01:03:55.579: INFO: Created: latency-svc-fm7tb
Feb 20 01:03:55.591: INFO: Got endpoints: latency-svc-fm7tb [836.551494ms]
Feb 20 01:03:55.683: INFO: Created: latency-svc-8654k
Feb 20 01:03:55.718: INFO: Got endpoints: latency-svc-8654k [954.771396ms]
Feb 20 01:03:55.722: INFO: Created: latency-svc-v2mml
Feb 20 01:03:55.747: INFO: Got endpoints: latency-svc-v2mml [915.262824ms]
Feb 20 01:03:55.841: INFO: Created: latency-svc-pljcf
Feb 20 01:03:55.887: INFO: Got endpoints: latency-svc-pljcf [967.570359ms]
Feb 20 01:03:55.889: INFO: Created: latency-svc-g2plt
Feb 20 01:03:55.900: INFO: Got endpoints: latency-svc-g2plt [974.345672ms]
Feb 20 01:03:56.043: INFO: Created: latency-svc-q6bsd
Feb 20 01:03:56.054: INFO: Got endpoints: latency-svc-q6bsd [1.072357928s]
Feb 20 01:03:56.119: INFO: Created: latency-svc-kl828
Feb 20 01:03:56.189: INFO: Created: latency-svc-s5sh6
Feb 20 01:03:56.191: INFO: Got endpoints: latency-svc-kl828 [1.128969272s]
Feb 20 01:03:56.195: INFO: Got endpoints: latency-svc-s5sh6 [1.086240136s]
Feb 20 01:03:56.219: INFO: Created: latency-svc-fkvsf
Feb 20 01:03:56.237: INFO: Got endpoints: latency-svc-fkvsf [987.833103ms]
Feb 20 01:03:56.241: INFO: Created: latency-svc-gd494
Feb 20 01:03:56.353: INFO: Created: latency-svc-qskbx
Feb 20 01:03:56.356: INFO: Got endpoints: latency-svc-gd494 [1.07585859s]
Feb 20 01:03:56.395: INFO: Got endpoints: latency-svc-qskbx [990.308455ms]
Feb 20 01:03:56.401: INFO: Created: latency-svc-qkmx9
Feb 20 01:03:56.401: INFO: Got endpoints: latency-svc-qkmx9 [974.653913ms]
Feb 20 01:03:56.453: INFO: Created: latency-svc-g8g7m
Feb 20 01:03:56.496: INFO: Got endpoints: latency-svc-g8g7m [1.040399643s]
Feb 20 01:03:56.532: INFO: Created: latency-svc-qhtdh
Feb 20 01:03:56.542: INFO: Got endpoints: latency-svc-qhtdh [998.708394ms]
Feb 20 01:03:56.638: INFO: Created: latency-svc-5h7qw
Feb 20 01:03:56.645: INFO: Got endpoints: latency-svc-5h7qw [1.087467065s]
Feb 20 01:03:56.680: INFO: Created: latency-svc-lkcqf
Feb 20 01:03:56.685: INFO: Got endpoints: latency-svc-lkcqf [1.093787975s]
Feb 20 01:03:56.718: INFO: Created: latency-svc-gpgzj
Feb 20 01:03:56.730: INFO: Got endpoints: latency-svc-gpgzj [1.011583919s]
Feb 20 01:03:56.792: INFO: Created: latency-svc-8ngx4
Feb 20 01:03:56.796: INFO: Got endpoints: latency-svc-8ngx4 [1.048845483s]
Feb 20 01:03:56.823: INFO: Created: latency-svc-bndkt
Feb 20 01:03:56.827: INFO: Got endpoints: latency-svc-bndkt [939.128106ms]
Feb 20 01:03:56.851: INFO: Created: latency-svc-wp74f
Feb 20 01:03:56.858: INFO: Got endpoints: latency-svc-wp74f [957.118435ms]
Feb 20 01:03:56.877: INFO: Created: latency-svc-68nn9
Feb 20 01:03:56.926: INFO: Got endpoints: latency-svc-68nn9 [871.728759ms]
Feb 20 01:03:56.930: INFO: Created: latency-svc-hftrr
Feb 20 01:03:56.954: INFO: Got endpoints: latency-svc-hftrr [762.095054ms]
Feb 20 01:03:56.954: INFO: Created: latency-svc-nbz9d
Feb 20 01:03:56.969: INFO: Got endpoints: latency-svc-nbz9d [773.84085ms]
Feb 20 01:03:56.993: INFO: Created: latency-svc-79k87
Feb 20 01:03:57.000: INFO: Got endpoints: latency-svc-79k87 [762.051212ms]
Feb 20 01:03:57.094: INFO: Created: latency-svc-bcjcj
Feb 20 01:03:57.130: INFO: Created: latency-svc-znp5f
Feb 20 01:03:57.130: INFO: Got endpoints: latency-svc-bcjcj [774.518069ms]
Feb 20 01:03:57.167: INFO: Got endpoints: latency-svc-znp5f [771.245697ms]
Feb 20 01:03:57.226: INFO: Created: latency-svc-jv8tl
Feb 20 01:03:57.246: INFO: Got endpoints: latency-svc-jv8tl [844.635416ms]
Feb 20 01:03:57.248: INFO: Created: latency-svc-mzgbn
Feb 20 01:03:57.253: INFO: Got endpoints: latency-svc-mzgbn [755.972494ms]
Feb 20 01:03:57.271: INFO: Created: latency-svc-ht2x8
Feb 20 01:03:57.287: INFO: Got endpoints: latency-svc-ht2x8 [744.738249ms]
Feb 20 01:03:57.289: INFO: Created: latency-svc-9csr8
Feb 20 01:03:57.299: INFO: Got endpoints: latency-svc-9csr8 [652.961616ms]
Feb 20 01:03:57.364: INFO: Created: latency-svc-c666v
Feb 20 01:03:57.399: INFO: Got endpoints: latency-svc-c666v [712.834507ms]
Feb 20 01:03:57.399: INFO: Created: latency-svc-9nqrq
Feb 20 01:03:57.404: INFO: Got endpoints: latency-svc-9nqrq [673.727364ms]
Feb 20 01:03:57.423: INFO: Created: latency-svc-4k6gp
Feb 20 01:03:57.426: INFO: Got endpoints: latency-svc-4k6gp [629.571953ms]
Feb 20 01:03:57.451: INFO: Created: latency-svc-nx2sb
Feb 20 01:03:57.462: INFO: Got endpoints: latency-svc-nx2sb [635.208807ms]
Feb 20 01:03:57.515: INFO: Created: latency-svc-ntp86
Feb 20 01:03:57.522: INFO: Got endpoints: latency-svc-ntp86 [664.425221ms]
Feb 20 01:03:57.559: INFO: Created: latency-svc-k847c
Feb 20 01:03:57.584: INFO: Got endpoints: latency-svc-k847c [657.654911ms]
Feb 20 01:03:57.584: INFO: Created: latency-svc-5lggj
Feb 20 01:03:57.601: INFO: Got endpoints: latency-svc-5lggj [646.890396ms]
Feb 20 01:03:57.602: INFO: Created: latency-svc-wkl69
Feb 20 01:03:57.611: INFO: Got endpoints: latency-svc-wkl69 [641.551311ms]
Feb 20 01:03:57.669: INFO: Created: latency-svc-7mtvx
Feb 20 01:03:57.687: INFO: Got endpoints: latency-svc-7mtvx [687.335878ms]
Feb 20 01:03:57.714: INFO: Created: latency-svc-dddzn
Feb 20 01:03:57.719: INFO: Got endpoints: latency-svc-dddzn [588.997626ms]
Feb 20 01:03:57.744: INFO: Created: latency-svc-clprh
Feb 20 01:03:57.750: INFO: Got endpoints: latency-svc-clprh [582.769855ms]
Feb 20 01:03:57.809: INFO: Created: latency-svc-c67fx
Feb 20 01:03:57.822: INFO: Got endpoints: latency-svc-c67fx [576.322436ms]
Feb 20 01:03:57.850: INFO: Created: latency-svc-2k2v7
Feb 20 01:03:57.861: INFO: Got endpoints: latency-svc-2k2v7 [608.673077ms]
Feb 20 01:03:57.898: INFO: Created: latency-svc-xzj8v
Feb 20 01:03:57.974: INFO: Got endpoints: latency-svc-xzj8v [687.031285ms]
Feb 20 01:03:58.005: INFO: Created: latency-svc-hlh94
Feb 20 01:03:58.015: INFO: Got endpoints: latency-svc-hlh94 [716.182824ms]
Feb 20 01:03:58.043: INFO: Created: latency-svc-qlpnj
Feb 20 01:03:58.051: INFO: Got endpoints: latency-svc-qlpnj [652.289717ms]
Feb 20 01:03:58.070: INFO: Created: latency-svc-kbnfm
Feb 20 01:03:58.153: INFO: Got endpoints: latency-svc-kbnfm [749.02809ms]
Feb 20 01:03:58.159: INFO: Created: latency-svc-rqbzt
Feb 20 01:03:58.169: INFO: Got endpoints: latency-svc-rqbzt [742.632967ms]
Feb 20 01:03:58.185: INFO: Created: latency-svc-vzq5k
Feb 20 01:03:58.194: INFO: Got endpoints: latency-svc-vzq5k [731.886893ms]
Feb 20 01:03:58.209: INFO: Created: latency-svc-5fjdp
Feb 20 01:03:58.210: INFO: Got endpoints: latency-svc-5fjdp [687.910153ms]
Feb 20 01:03:58.240: INFO: Created: latency-svc-fmxcq
Feb 20 01:03:58.246: INFO: Got endpoints: latency-svc-fmxcq [662.445158ms]
Feb 20 01:03:58.305: INFO: Created: latency-svc-qhfxb
Feb 20 01:03:58.362: INFO: Got endpoints: latency-svc-qhfxb [761.342608ms]
Feb 20 01:03:58.364: INFO: Created: latency-svc-w2qpx
Feb 20 01:03:58.366: INFO: Got endpoints: latency-svc-w2qpx [754.675842ms]
Feb 20 01:03:58.383: INFO: Created: latency-svc-jztrc
Feb 20 01:03:58.385: INFO: Got endpoints: latency-svc-jztrc [697.374907ms]
Feb 20 01:03:58.451: INFO: Created: latency-svc-gjkrb
Feb 20 01:03:58.482: INFO: Got endpoints: latency-svc-gjkrb [761.824912ms]
Feb 20 01:03:58.485: INFO: Created: latency-svc-84k4z
Feb 20 01:03:58.491: INFO: Got endpoints: latency-svc-84k4z [741.375394ms]
Feb 20 01:03:58.508: INFO: Created: latency-svc-ntwtq
Feb 20 01:03:58.519: INFO: Got endpoints: latency-svc-ntwtq [696.381218ms]
Feb 20 01:03:58.533: INFO: Created: latency-svc-62fbt
Feb 20 01:03:58.581: INFO: Got endpoints: latency-svc-62fbt [719.084393ms]
Feb 20 01:03:58.594: INFO: Created: latency-svc-5zxhp
Feb 20 01:03:58.650: INFO: Got endpoints: latency-svc-5zxhp [675.140383ms]
Feb 20 01:03:58.653: INFO: Created: latency-svc-glvzb
Feb 20 01:03:58.660: INFO: Got endpoints: latency-svc-glvzb [644.399646ms]
Feb 20 01:03:58.725: INFO: Created: latency-svc-pwclz
Feb 20 01:03:58.753: INFO: Got endpoints: latency-svc-pwclz [701.782564ms]
Feb 20 01:03:58.761: INFO: Created: latency-svc-q5pq8
Feb 20 01:03:58.771: INFO: Got endpoints: latency-svc-q5pq8 [617.762165ms]
Feb 20 01:03:58.773: INFO: Created: latency-svc-fl9cg
Feb 20 01:03:58.796: INFO: Got endpoints: latency-svc-fl9cg [626.534927ms]
Feb 20 01:03:58.825: INFO: Created: latency-svc-mk55p
Feb 20 01:03:58.896: INFO: Got endpoints: latency-svc-mk55p [702.365593ms]
Feb 20 01:03:58.908: INFO: Created: latency-svc-7gcqg
Feb 20 01:03:58.911: INFO: Got endpoints: latency-svc-7gcqg [700.378706ms]
Feb 20 01:03:58.954: INFO: Created: latency-svc-t5tsp
Feb 20 01:03:58.992: INFO: Got endpoints: latency-svc-t5tsp [745.603833ms]
Feb 20 01:03:59.067: INFO: Created: latency-svc-xklt9
Feb 20 01:03:59.082: INFO: Got endpoints: latency-svc-xklt9 [719.901421ms]
Feb 20 01:03:59.136: INFO: Created: latency-svc-75fsr
Feb 20 01:03:59.146: INFO: Got endpoints: latency-svc-75fsr [780.338077ms]
Feb 20 01:03:59.275: INFO: Created: latency-svc-8x7bj
Feb 20 01:03:59.312: INFO: Got endpoints: latency-svc-8x7bj [927.64596ms]
Feb 20 01:03:59.368: INFO: Created: latency-svc-jl8jc
Feb 20 01:03:59.459: INFO: Got endpoints: latency-svc-jl8jc [976.64457ms]
Feb 20 01:03:59.480: INFO: Created: latency-svc-g8ljt
Feb 20 01:03:59.494: INFO: Got endpoints: latency-svc-g8ljt [1.002650231s]
Feb 20 01:03:59.543: INFO: Created: latency-svc-4q44q
Feb 20 01:03:59.549: INFO: Got endpoints: latency-svc-4q44q [1.02959271s]
Feb 20 01:03:59.663: INFO: Created: latency-svc-6m5cn
Feb 20 01:03:59.665: INFO: Got endpoints: latency-svc-6m5cn [1.084164622s]
Feb 20 01:03:59.733: INFO: Created: latency-svc-nkfpg
Feb 20 01:03:59.738: INFO: Got endpoints: latency-svc-nkfpg [1.088591055s]
Feb 20 01:03:59.813: INFO: Created: latency-svc-6qd5p
Feb 20 01:03:59.838: INFO: Created: latency-svc-stwgh
Feb 20 01:03:59.839: INFO: Got endpoints: latency-svc-6qd5p [1.179417245s]
Feb 20 01:03:59.865: INFO: Got endpoints: latency-svc-stwgh [1.11166582s]
Feb 20 01:03:59.893: INFO: Created: latency-svc-pf8m6
Feb 20 01:03:59.897: INFO: Got endpoints: latency-svc-pf8m6 [1.12650227s]
Feb 20 01:03:59.969: INFO: Created: latency-svc-7vvv8
Feb 20 01:03:59.974: INFO: Got endpoints: latency-svc-7vvv8 [1.178149282s]
Feb 20 01:03:59.998: INFO: Created: latency-svc-xww4m
Feb 20 01:04:00.056: INFO: Got endpoints: latency-svc-xww4m [1.159205569s]
Feb 20 01:04:00.057: INFO: Created: latency-svc-n79s6
Feb 20 01:04:00.133: INFO: Got endpoints: latency-svc-n79s6 [1.222347268s]
Feb 20 01:04:00.153: INFO: Created: latency-svc-bhfr8
Feb 20 01:04:00.185: INFO: Got endpoints: latency-svc-bhfr8 [1.192585777s]
Feb 20 01:04:00.190: INFO: Created: latency-svc-m64vb
Feb 20 01:04:00.212: INFO: Got endpoints: latency-svc-m64vb [1.129764943s]
Feb 20 01:04:00.315: INFO: Created: latency-svc-vb5s4
Feb 20 01:04:00.334: INFO: Got endpoints: latency-svc-vb5s4 [1.187306785s]
Feb 20 01:04:00.336: INFO: Created: latency-svc-z9v9k
Feb 20 01:04:00.350: INFO: Got endpoints: latency-svc-z9v9k [1.037601534s]
Feb 20 01:04:00.386: INFO: Created: latency-svc-r5dm2
Feb 20 01:04:00.492: INFO: Got endpoints: latency-svc-r5dm2 [1.033204017s]
Feb 20 01:04:00.518: INFO: Created: latency-svc-2hfdg
Feb 20 01:04:00.532: INFO: Got endpoints: latency-svc-2hfdg [1.037525546s]
Feb 20 01:04:00.581: INFO: Created: latency-svc-2lzkf
Feb 20 01:04:00.583: INFO: Got endpoints: latency-svc-2lzkf [1.034202219s]
Feb 20 01:04:00.662: INFO: Created: latency-svc-sw4h8
Feb 20 01:04:00.699: INFO: Got endpoints: latency-svc-sw4h8 [1.033593888s]
Feb 20 01:04:00.735: INFO: Created: latency-svc-gz7nd
Feb 20 01:04:00.755: INFO: Got endpoints: latency-svc-gz7nd [1.016109784s]
Feb 20 01:04:00.803: INFO: Created: latency-svc-9w992
Feb 20 01:04:00.805: INFO: Got endpoints: latency-svc-9w992 [965.7058ms]
Feb 20 01:04:00.856: INFO: Created: latency-svc-8rcdr
Feb 20 01:04:00.873: INFO: Got endpoints: latency-svc-8rcdr [1.007376102s]
Feb 20 01:04:00.990: INFO: Created: latency-svc-ps5fq
Feb 20 01:04:01.003: INFO: Got endpoints: latency-svc-ps5fq [1.10574495s]
Feb 20 01:04:01.050: INFO: Created: latency-svc-wlgqm
Feb 20 01:04:01.178: INFO: Got endpoints: latency-svc-wlgqm [1.204129547s]
Feb 20 01:04:01.227: INFO: Created: latency-svc-hn9s4
Feb 20 01:04:01.236: INFO: Got endpoints: latency-svc-hn9s4 [1.18016688s]
Feb 20 01:04:01.373: INFO: Created: latency-svc-gfchv
Feb 20 01:04:01.394: INFO: Got endpoints: latency-svc-gfchv [1.260011772s]
Feb 20 01:04:01.399: INFO: Created: latency-svc-ktkhw
Feb 20 01:04:01.410: INFO: Got endpoints: latency-svc-ktkhw [1.224356202s]
Feb 20 01:04:01.444: INFO: Created: latency-svc-fbw77
Feb 20 01:04:01.452: INFO: Got endpoints: latency-svc-fbw77 [1.239747519s]
Feb 20 01:04:01.520: INFO: Created: latency-svc-rl49g
Feb 20 01:04:01.524: INFO: Got endpoints: latency-svc-rl49g [1.190064664s]
Feb 20 01:04:01.560: INFO: Created: latency-svc-49dwt
Feb 20 01:04:01.601: INFO: Got endpoints: latency-svc-49dwt [1.250359974s]
Feb 20 01:04:01.738: INFO: Created: latency-svc-7r9mj
Feb 20 01:04:01.765: INFO: Created: latency-svc-dnf7j
Feb 20 01:04:01.767: INFO: Got endpoints: latency-svc-7r9mj [1.274623471s]
Feb 20 01:04:01.770: INFO: Got endpoints: latency-svc-dnf7j [1.238401031s]
Feb 20 01:04:01.800: INFO: Created: latency-svc-gvmxt
Feb 20 01:04:01.807: INFO: Got endpoints: latency-svc-gvmxt [1.223424246s]
Feb 20 01:04:01.826: INFO: Created: latency-svc-5rwmc
Feb 20 01:04:01.829: INFO: Got endpoints: latency-svc-5rwmc [1.129969575s]
Feb 20 01:04:01.899: INFO: Created: latency-svc-pdpdl
Feb 20 01:04:01.909: INFO: Got endpoints: latency-svc-pdpdl [1.154577848s]
Feb 20 01:04:01.931: INFO: Created: latency-svc-5pl7v
Feb 20 01:04:01.941: INFO: Got endpoints: latency-svc-5pl7v [1.136113252s]
Feb 20 01:04:02.121: INFO: Created: latency-svc-d8w9b
Feb 20 01:04:02.139: INFO: Got endpoints: latency-svc-d8w9b [1.265729s]
Feb 20 01:04:02.165: INFO: Created: latency-svc-5xn2h
Feb 20 01:04:02.174: INFO: Got endpoints: latency-svc-5xn2h [1.170422671s]
Feb 20 01:04:02.249: INFO: Created: latency-svc-t6rr6
Feb 20 01:04:02.256: INFO: Got endpoints: latency-svc-t6rr6 [1.077643806s]
Feb 20 01:04:02.279: INFO: Created: latency-svc-g52jj
Feb 20 01:04:02.285: INFO: Got endpoints: latency-svc-g52jj [1.049262983s]
Feb 20 01:04:02.307: INFO: Created: latency-svc-zbc79
Feb 20 01:04:02.322: INFO: Got endpoints: latency-svc-zbc79 [927.70477ms]
Feb 20 01:04:02.421: INFO: Created: latency-svc-s4gxz
Feb 20 01:04:02.424: INFO: Got endpoints: latency-svc-s4gxz [1.014326679s]
Feb 20 01:04:02.473: INFO: Created: latency-svc-qnkwk
Feb 20 01:04:02.474: INFO: Got endpoints: latency-svc-qnkwk [1.021641103s]
Feb 20 01:04:02.498: INFO: Created: latency-svc-9dtfm
Feb 20 01:04:02.501: INFO: Got endpoints: latency-svc-9dtfm [976.40976ms]
Feb 20 01:04:02.567: INFO: Created: latency-svc-rf4q5
Feb 20 01:04:02.575: INFO: Got endpoints: latency-svc-rf4q5 [973.969923ms]
Feb 20 01:04:02.615: INFO: Created: latency-svc-kg84k
Feb 20 01:04:02.621: INFO: Got endpoints: latency-svc-kg84k [854.092876ms]
Feb 20 01:04:02.643: INFO: Created: latency-svc-5cmd6
Feb 20 01:04:02.644: INFO: Got endpoints: latency-svc-5cmd6 [873.138594ms]
Feb 20 01:04:02.706: INFO: Created: latency-svc-drfw5
Feb 20 01:04:02.728: INFO: Got endpoints: latency-svc-drfw5 [921.016099ms]
Feb 20 01:04:02.732: INFO: Created: latency-svc-n9k8m
Feb 20 01:04:02.734: INFO: Got endpoints: latency-svc-n9k8m [904.726385ms]
Feb 20 01:04:02.756: INFO: Created: latency-svc-tvz7j
Feb 20 01:04:02.759: INFO: Got endpoints: latency-svc-tvz7j [849.181764ms]
Feb 20 01:04:02.869: INFO: Created: latency-svc-4wvsh
Feb 20 01:04:02.896: INFO: Got endpoints: latency-svc-4wvsh [954.301455ms]
Feb 20 01:04:02.905: INFO: Created: latency-svc-mcb7w
Feb 20 01:04:02.905: INFO: Got endpoints: latency-svc-mcb7w [765.260209ms]
Feb 20 01:04:02.922: INFO: Created: latency-svc-9cz5b
Feb 20 01:04:02.926: INFO: Got endpoints: latency-svc-9cz5b [752.170962ms]
Feb 20 01:04:02.954: INFO: Created: latency-svc-6wqjj
Feb 20 01:04:02.956: INFO: Got endpoints: latency-svc-6wqjj [700.097921ms]
Feb 20 01:04:03.068: INFO: Created: latency-svc-wdh9m
Feb 20 01:04:03.084: INFO: Got endpoints: latency-svc-wdh9m [798.056443ms]
Feb 20 01:04:03.124: INFO: Created: latency-svc-xp6r4
Feb 20 01:04:03.285: INFO: Got endpoints: latency-svc-xp6r4 [963.258088ms]
Feb 20 01:04:03.316: INFO: Created: latency-svc-djhpd
Feb 20 01:04:03.327: INFO: Got endpoints: latency-svc-djhpd [903.0381ms]
Feb 20 01:04:03.382: INFO: Created: latency-svc-c4n7v
Feb 20 01:04:03.443: INFO: Got endpoints: latency-svc-c4n7v [968.778139ms]
Feb 20 01:04:03.456: INFO: Created: latency-svc-8r8ts
Feb 20 01:04:03.464: INFO: Got endpoints: latency-svc-8r8ts [963.193978ms]
Feb 20 01:04:03.495: INFO: Created: latency-svc-rsgln
Feb 20 01:04:03.601: INFO: Created: latency-svc-l2kgq
Feb 20 01:04:03.601: INFO: Got endpoints: latency-svc-rsgln [1.026113118s]
Feb 20 01:04:03.627: INFO: Got endpoints: latency-svc-l2kgq [1.005539211s]
Feb 20 01:04:03.627: INFO: Created: latency-svc-487lh
Feb 20 01:04:03.632: INFO: Got endpoints: latency-svc-487lh [988.713227ms]
Feb 20 01:04:03.653: INFO: Created: latency-svc-5gf98
Feb 20 01:04:03.662: INFO: Got endpoints: latency-svc-5gf98 [934.230174ms]
Feb 20 01:04:03.681: INFO: Created: latency-svc-hgsf8
Feb 20 01:04:03.687: INFO: Got endpoints: latency-svc-hgsf8 [952.655528ms]
Feb 20 01:04:03.738: INFO: Created: latency-svc-9dt58
Feb 20 01:04:03.770: INFO: Got endpoints: latency-svc-9dt58 [1.011197564s]
Feb 20 01:04:03.771: INFO: Created: latency-svc-89r58
Feb 20 01:04:03.776: INFO: Got endpoints: latency-svc-89r58 [880.42479ms]
Feb 20 01:04:03.801: INFO: Created: latency-svc-t5rs2
Feb 20 01:04:03.810: INFO: Got endpoints: latency-svc-t5rs2 [905.144954ms]
Feb 20 01:04:03.840: INFO: Created: latency-svc-gb59b
Feb 20 01:04:03.841: INFO: Got endpoints: latency-svc-gb59b [914.498625ms]
Feb 20 01:04:03.907: INFO: Created: latency-svc-tlhlg
Feb 20 01:04:03.926: INFO: Got endpoints: latency-svc-tlhlg [969.954317ms]
Feb 20 01:04:03.927: INFO: Created: latency-svc-tr2s5
Feb 20 01:04:03.940: INFO: Got endpoints: latency-svc-tr2s5 [855.781684ms]
Feb 20 01:04:03.987: INFO: Created: latency-svc-p2djm
Feb 20 01:04:03.999: INFO: Got endpoints: latency-svc-p2djm [713.922586ms]
Feb 20 01:04:04.168: INFO: Created: latency-svc-4v9c5
Feb 20 01:04:04.191: INFO: Got endpoints: latency-svc-4v9c5 [864.027529ms]
Feb 20 01:04:04.197: INFO: Created: latency-svc-dzwm6
Feb 20 01:04:04.200: INFO: Got endpoints: latency-svc-dzwm6 [756.996794ms]
Feb 20 01:04:04.200: INFO: Latencies: [110.357057ms 138.170317ms 180.915278ms 298.424673ms 435.459079ms 479.586154ms 576.322436ms 582.769855ms 588.997626ms 608.673077ms 617.762165ms 626.534927ms 629.571953ms 633.16409ms 635.208807ms 641.551311ms 644.399646ms 646.890396ms 652.289717ms 652.961616ms 653.479518ms 657.654911ms 662.445158ms 664.425221ms 673.727364ms 675.140383ms 687.031285ms 687.335878ms 687.910153ms 696.381218ms 697.374907ms 700.097921ms 700.378706ms 701.782564ms 702.365593ms 712.834507ms 713.922586ms 716.182824ms 719.084393ms 719.901421ms 731.886893ms 741.375394ms 742.632967ms 744.738249ms 745.603833ms 749.02809ms 752.170962ms 754.675842ms 755.972494ms 756.996794ms 761.342608ms 761.824912ms 762.051212ms 762.095054ms 765.260209ms 765.834456ms 771.245697ms 773.84085ms 774.518069ms 780.338077ms 788.179758ms 797.112883ms 798.056443ms 805.669286ms 811.203954ms 826.073498ms 832.124178ms 832.615292ms 836.551494ms 838.889163ms 843.177342ms 844.635416ms 847.274283ms 849.181764ms 854.092876ms 855.781684ms 860.659973ms 863.99732ms 864.027529ms 871.728759ms 872.733825ms 873.138594ms 876.8459ms 879.143027ms 880.42479ms 903.0381ms 903.757957ms 904.726385ms 905.144954ms 914.389933ms 914.498625ms 915.262824ms 917.644661ms 921.016099ms 924.317879ms 927.64596ms 927.70477ms 934.230174ms 939.128106ms 945.475645ms 952.655528ms 954.301455ms 954.771396ms 957.118435ms 960.010635ms 960.826195ms 962.639683ms 963.193978ms 963.258088ms 965.7058ms 967.570359ms 968.778139ms 969.954317ms 973.477561ms 973.969923ms 974.345672ms 974.653913ms 976.40976ms 976.64457ms 978.868814ms 982.113583ms 987.833103ms 988.713227ms 989.722506ms 990.308455ms 991.061268ms 998.708394ms 999.720823ms 1.002650231s 1.005539211s 1.007376102s 1.011197564s 1.011583919s 1.014326679s 1.016109784s 1.021641103s 1.024024426s 1.026113118s 1.02959271s 1.033204017s 1.033593888s 1.034202219s 1.034463105s 1.036356893s 1.037525546s 1.037601534s 1.040399643s 1.048845483s 1.049262983s 1.072357928s 1.07585859s 1.077294039s 1.077643806s 1.08071267s 1.084164622s 1.086240136s 1.087467065s 1.088591055s 1.093787975s 1.101839114s 1.10574495s 1.11166582s 1.12650227s 1.128969272s 1.129764943s 1.129969575s 1.135461335s 1.136113252s 1.154577848s 1.154594626s 1.157860909s 1.159205569s 1.170422671s 1.171686762s 1.178149282s 1.178604873s 1.179417245s 1.18016688s 1.185950785s 1.187306785s 1.188984886s 1.190064664s 1.192585777s 1.193370199s 1.201774171s 1.203327599s 1.204129547s 1.208186918s 1.2095835s 1.214958523s 1.222347268s 1.223424246s 1.224356202s 1.230204661s 1.238401031s 1.239747519s 1.250359974s 1.260011772s 1.265729s 1.274623471s]
Feb 20 01:04:04.200: INFO: 50 %ile: 952.655528ms
Feb 20 01:04:04.200: INFO: 90 %ile: 1.188984886s
Feb 20 01:04:04.200: INFO: 99 %ile: 1.265729s
Feb 20 01:04:04.200: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:04:04.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9712" for this suite.

• [SLOW TEST:20.871 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":280,"completed":214,"skipped":3580,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:04:04.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-3f42f8b3-581d-4de3-bda1-8cbddfb1c313
STEP: Creating a pod to test consume secrets
Feb 20 01:04:04.480: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582" in namespace "projected-7628" to be "success or failure"
Feb 20 01:04:04.497: INFO: Pod "pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582": Phase="Pending", Reason="", readiness=false. Elapsed: 16.113888ms
Feb 20 01:04:06.555: INFO: Pod "pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07441526s
Feb 20 01:04:08.563: INFO: Pod "pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082961017s
Feb 20 01:04:10.597: INFO: Pod "pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116036788s
Feb 20 01:04:12.628: INFO: Pod "pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147305611s
Feb 20 01:04:14.658: INFO: Pod "pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.177627557s
STEP: Saw pod success
Feb 20 01:04:14.658: INFO: Pod "pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582" satisfied condition "success or failure"
Feb 20 01:04:14.705: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582 container projected-secret-volume-test: 
STEP: delete the pod
Feb 20 01:04:14.784: INFO: Waiting for pod pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582 to disappear
Feb 20 01:04:14.804: INFO: Pod pod-projected-secrets-245beba7-3de3-4d26-b57d-ecb3572e3582 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:04:14.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7628" for this suite.

• [SLOW TEST:10.676 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":215,"skipped":3582,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:04:14.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:04:15.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4420" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":280,"completed":216,"skipped":3584,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:04:15.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-0c63fb14-90ee-4772-8a26-746c400a6f9e
STEP: Creating a pod to test consume configMaps
Feb 20 01:04:15.411: INFO: Waiting up to 5m0s for pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55" in namespace "configmap-5836" to be "success or failure"
Feb 20 01:04:15.550: INFO: Pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55": Phase="Pending", Reason="", readiness=false. Elapsed: 138.555059ms
Feb 20 01:04:17.574: INFO: Pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163059237s
Feb 20 01:04:19.638: INFO: Pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226883754s
Feb 20 01:04:21.669: INFO: Pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258287956s
Feb 20 01:04:23.740: INFO: Pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55": Phase="Pending", Reason="", readiness=false. Elapsed: 8.328491438s
Feb 20 01:04:25.790: INFO: Pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55": Phase="Pending", Reason="", readiness=false. Elapsed: 10.379260122s
Feb 20 01:04:27.843: INFO: Pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55": Phase="Pending", Reason="", readiness=false. Elapsed: 12.43174419s
Feb 20 01:04:29.888: INFO: Pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.477117339s
STEP: Saw pod success
Feb 20 01:04:29.888: INFO: Pod "pod-configmaps-52518460-bee1-48c1-8db7-312352513d55" satisfied condition "success or failure"
Feb 20 01:04:29.892: INFO: Trying to get logs from node jerma-node pod pod-configmaps-52518460-bee1-48c1-8db7-312352513d55 container configmap-volume-test: 
STEP: delete the pod
Feb 20 01:04:31.072: INFO: Waiting for pod pod-configmaps-52518460-bee1-48c1-8db7-312352513d55 to disappear
Feb 20 01:04:31.079: INFO: Pod pod-configmaps-52518460-bee1-48c1-8db7-312352513d55 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:04:31.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5836" for this suite.

• [SLOW TEST:16.085 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":217,"skipped":3591,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:04:31.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-zj2wc in namespace proxy-1451
I0220 01:04:31.899072       9 runners.go:189] Created replication controller with name: proxy-service-zj2wc, namespace: proxy-1451, replica count: 1
I0220 01:04:32.950195       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:33.950956       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:34.951520       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:35.951909       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:36.952329       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:37.952855       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:38.953299       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:39.953757       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:40.954433       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:41.955192       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 01:04:42.955931       9 runners.go:189] proxy-service-zj2wc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 20 01:04:42.978: INFO: setup took 11.297183191s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 20 01:04:43.019: INFO: (0) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 39.78886ms)
Feb 20 01:04:43.019: INFO: (0) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 39.357159ms)
Feb 20 01:04:43.019: INFO: (0) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 39.327374ms)
Feb 20 01:04:43.020: INFO: (0) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 39.5199ms)
Feb 20 01:04:43.020: INFO: (0) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 40.313788ms)
Feb 20 01:04:43.021: INFO: (0) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 40.934042ms)
Feb 20 01:04:43.021: INFO: (0) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 41.241686ms)
Feb 20 01:04:43.021: INFO: (0) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 42.904188ms)
Feb 20 01:04:43.022: INFO: (0) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: ... (200; 76.434997ms)
Feb 20 01:04:43.154: INFO: (1) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 76.168429ms)
Feb 20 01:04:43.154: INFO: (1) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 76.306745ms)
Feb 20 01:04:43.154: INFO: (1) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 77.122923ms)
Feb 20 01:04:43.156: INFO: (1) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 78.202083ms)
Feb 20 01:04:43.157: INFO: (1) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 79.275467ms)
Feb 20 01:04:43.157: INFO: (1) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 79.689642ms)
Feb 20 01:04:43.157: INFO: (1) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 79.642483ms)
Feb 20 01:04:43.158: INFO: (1) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 80.795244ms)
Feb 20 01:04:43.158: INFO: (1) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: ... (200; 20.594028ms)
Feb 20 01:04:43.184: INFO: (2) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 22.346045ms)
Feb 20 01:04:43.185: INFO: (2) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 23.243099ms)
Feb 20 01:04:43.185: INFO: (2) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 23.770964ms)
Feb 20 01:04:43.185: INFO: (2) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test (200; 27.547536ms)
Feb 20 01:04:43.189: INFO: (2) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 27.27214ms)
Feb 20 01:04:43.189: INFO: (2) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 27.45252ms)
Feb 20 01:04:43.207: INFO: (3) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 18.236941ms)
Feb 20 01:04:43.208: INFO: (3) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 17.820343ms)
Feb 20 01:04:43.208: INFO: (3) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 18.376238ms)
Feb 20 01:04:43.208: INFO: (3) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 18.377752ms)
Feb 20 01:04:43.208: INFO: (3) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 18.754093ms)
Feb 20 01:04:43.214: INFO: (3) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test (200; 24.683269ms)
Feb 20 01:04:43.214: INFO: (3) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 24.484529ms)
Feb 20 01:04:43.214: INFO: (3) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 24.461674ms)
Feb 20 01:04:43.214: INFO: (3) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 24.848939ms)
Feb 20 01:04:43.217: INFO: (3) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 27.664638ms)
Feb 20 01:04:43.236: INFO: (3) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 46.776941ms)
Feb 20 01:04:43.236: INFO: (3) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 46.867842ms)
Feb 20 01:04:43.236: INFO: (3) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 47.333197ms)
Feb 20 01:04:43.236: INFO: (3) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 46.966756ms)
Feb 20 01:04:43.301: INFO: (4) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 63.479062ms)
Feb 20 01:04:43.301: INFO: (4) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 63.296473ms)
Feb 20 01:04:43.302: INFO: (4) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 66.013879ms)
Feb 20 01:04:43.302: INFO: (4) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 64.981892ms)
Feb 20 01:04:43.303: INFO: (4) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: ... (200; 65.115838ms)
Feb 20 01:04:43.303: INFO: (4) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 65.663562ms)
Feb 20 01:04:43.303: INFO: (4) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 65.401975ms)
Feb 20 01:04:43.303: INFO: (4) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 65.072726ms)
Feb 20 01:04:43.303: INFO: (4) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 65.435778ms)
Feb 20 01:04:43.304: INFO: (4) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 66.678067ms)
Feb 20 01:04:43.304: INFO: (4) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 66.648264ms)
Feb 20 01:04:43.304: INFO: (4) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 66.776101ms)
Feb 20 01:04:43.304: INFO: (4) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 67.272988ms)
Feb 20 01:04:43.304: INFO: (4) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 67.189836ms)
Feb 20 01:04:43.305: INFO: (4) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 68.497649ms)
Feb 20 01:04:43.322: INFO: (5) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 15.559451ms)
Feb 20 01:04:43.322: INFO: (5) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 15.565674ms)
Feb 20 01:04:43.322: INFO: (5) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 15.681792ms)
Feb 20 01:04:43.322: INFO: (5) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 16.261552ms)
Feb 20 01:04:43.322: INFO: (5) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 16.426202ms)
Feb 20 01:04:43.323: INFO: (5) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 16.413856ms)
Feb 20 01:04:43.323: INFO: (5) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 16.888831ms)
Feb 20 01:04:43.324: INFO: (5) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 18.657966ms)
Feb 20 01:04:43.324: INFO: (5) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 18.268026ms)
Feb 20 01:04:43.324: INFO: (5) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: ... (200; 20.284468ms)
Feb 20 01:04:43.329: INFO: (5) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 23.353083ms)
Feb 20 01:04:43.331: INFO: (5) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 24.439474ms)
Feb 20 01:04:43.340: INFO: (5) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 33.952448ms)
Feb 20 01:04:43.383: INFO: (5) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 77.659881ms)
Feb 20 01:04:43.402: INFO: (6) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 18.392964ms)
Feb 20 01:04:43.402: INFO: (6) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 18.539961ms)
Feb 20 01:04:43.403: INFO: (6) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 18.674662ms)
Feb 20 01:04:43.403: INFO: (6) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 19.41292ms)
Feb 20 01:04:43.403: INFO: (6) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test (200; 25.969165ms)
Feb 20 01:04:43.410: INFO: (6) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 25.749269ms)
Feb 20 01:04:43.410: INFO: (6) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 26.247779ms)
Feb 20 01:04:43.410: INFO: (6) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 26.335065ms)
Feb 20 01:04:43.437: INFO: (6) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 53.220209ms)
Feb 20 01:04:43.437: INFO: (6) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 53.193677ms)
Feb 20 01:04:43.437: INFO: (6) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 53.385989ms)
Feb 20 01:04:43.437: INFO: (6) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 53.228426ms)
Feb 20 01:04:43.437: INFO: (6) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 53.579342ms)
Feb 20 01:04:43.437: INFO: (6) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 53.561432ms)
Feb 20 01:04:43.464: INFO: (7) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 25.049356ms)
Feb 20 01:04:43.464: INFO: (7) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 25.939938ms)
Feb 20 01:04:43.465: INFO: (7) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 25.652032ms)
Feb 20 01:04:43.465: INFO: (7) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 27.268475ms)
Feb 20 01:04:43.465: INFO: (7) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test<... (200; 32.923018ms)
Feb 20 01:04:43.508: INFO: (8) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 34.33199ms)
Feb 20 01:04:43.508: INFO: (8) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 34.012017ms)
Feb 20 01:04:43.509: INFO: (8) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 34.353562ms)
Feb 20 01:04:43.509: INFO: (8) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 34.65092ms)
Feb 20 01:04:43.509: INFO: (8) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 34.407384ms)
Feb 20 01:04:43.509: INFO: (8) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test<... (200; 8.161849ms)
Feb 20 01:04:43.520: INFO: (9) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 10.11619ms)
Feb 20 01:04:43.521: INFO: (9) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 10.18505ms)
Feb 20 01:04:43.521: INFO: (9) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: ... (200; 23.12634ms)
Feb 20 01:04:43.534: INFO: (9) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 23.211621ms)
Feb 20 01:04:43.534: INFO: (9) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 23.777712ms)
Feb 20 01:04:43.534: INFO: (9) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 23.722724ms)
Feb 20 01:04:43.534: INFO: (9) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 23.706861ms)
Feb 20 01:04:43.534: INFO: (9) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 23.220685ms)
Feb 20 01:04:43.535: INFO: (9) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 24.920621ms)
Feb 20 01:04:43.537: INFO: (9) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 26.924625ms)
Feb 20 01:04:43.538: INFO: (9) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 27.00208ms)
Feb 20 01:04:43.538: INFO: (9) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 27.093025ms)
Feb 20 01:04:43.538: INFO: (9) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 27.264637ms)
Feb 20 01:04:43.597: INFO: (10) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 58.248964ms)
Feb 20 01:04:43.598: INFO: (10) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 59.644111ms)
Feb 20 01:04:43.598: INFO: (10) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 59.464537ms)
Feb 20 01:04:43.598: INFO: (10) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 59.743926ms)
Feb 20 01:04:43.598: INFO: (10) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 60.112268ms)
Feb 20 01:04:43.598: INFO: (10) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 60.181876ms)
Feb 20 01:04:43.598: INFO: (10) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 60.003696ms)
Feb 20 01:04:43.598: INFO: (10) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 59.992525ms)
Feb 20 01:04:43.599: INFO: (10) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test<... (200; 32.050192ms)
Feb 20 01:04:43.662: INFO: (11) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 32.62178ms)
Feb 20 01:04:43.662: INFO: (11) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 32.217264ms)
Feb 20 01:04:43.662: INFO: (11) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 32.43055ms)
Feb 20 01:04:43.662: INFO: (11) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 32.776267ms)
Feb 20 01:04:43.662: INFO: (11) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: ... (200; 32.014514ms)
Feb 20 01:04:43.663: INFO: (11) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 33.281865ms)
Feb 20 01:04:43.663: INFO: (11) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 34.05463ms)
Feb 20 01:04:43.666: INFO: (11) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 36.191813ms)
Feb 20 01:04:43.666: INFO: (11) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 35.289731ms)
Feb 20 01:04:43.667: INFO: (11) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 37.532268ms)
Feb 20 01:04:43.667: INFO: (11) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 36.755184ms)
Feb 20 01:04:43.673: INFO: (12) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 5.894937ms)
Feb 20 01:04:43.680: INFO: (12) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 12.901276ms)
Feb 20 01:04:43.681: INFO: (12) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 13.199516ms)
Feb 20 01:04:43.681: INFO: (12) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 13.362633ms)
Feb 20 01:04:43.681: INFO: (12) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test<... (200; 13.666263ms)
Feb 20 01:04:43.681: INFO: (12) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 13.679925ms)
Feb 20 01:04:43.681: INFO: (12) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 12.99481ms)
Feb 20 01:04:43.681: INFO: (12) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 13.651815ms)
Feb 20 01:04:43.724: INFO: (12) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 56.350373ms)
Feb 20 01:04:43.724: INFO: (12) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 56.169564ms)
Feb 20 01:04:43.724: INFO: (12) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 56.349314ms)
Feb 20 01:04:43.724: INFO: (12) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 56.551549ms)
Feb 20 01:04:43.724: INFO: (12) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 56.246985ms)
Feb 20 01:04:43.724: INFO: (12) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 56.676846ms)
Feb 20 01:04:43.724: INFO: (12) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 56.452753ms)
Feb 20 01:04:43.737: INFO: (13) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 12.107959ms)
Feb 20 01:04:43.737: INFO: (13) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 12.680774ms)
Feb 20 01:04:43.737: INFO: (13) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 12.348599ms)
Feb 20 01:04:43.737: INFO: (13) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 12.494427ms)
Feb 20 01:04:43.737: INFO: (13) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 12.509057ms)
Feb 20 01:04:43.737: INFO: (13) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 12.501802ms)
Feb 20 01:04:43.739: INFO: (13) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 14.510496ms)
Feb 20 01:04:43.739: INFO: (13) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 14.850632ms)
Feb 20 01:04:43.739: INFO: (13) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 14.461458ms)
Feb 20 01:04:43.739: INFO: (13) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 14.492959ms)
Feb 20 01:04:43.739: INFO: (13) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 14.574027ms)
Feb 20 01:04:43.739: INFO: (13) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 15.030201ms)
Feb 20 01:04:43.739: INFO: (13) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test<... (200; 13.078315ms)
Feb 20 01:04:43.755: INFO: (14) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 13.516298ms)
Feb 20 01:04:43.755: INFO: (14) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: ... (200; 22.329984ms)
Feb 20 01:04:43.764: INFO: (14) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 22.353501ms)
Feb 20 01:04:43.768: INFO: (14) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 26.413414ms)
Feb 20 01:04:43.768: INFO: (14) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 26.818694ms)
Feb 20 01:04:43.769: INFO: (14) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 26.654218ms)
Feb 20 01:04:43.769: INFO: (14) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 27.185646ms)
Feb 20 01:04:43.769: INFO: (14) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 27.354875ms)
Feb 20 01:04:43.769: INFO: (14) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 27.700758ms)
Feb 20 01:04:43.769: INFO: (14) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 28.045389ms)
Feb 20 01:04:43.769: INFO: (14) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 27.602315ms)
Feb 20 01:04:43.769: INFO: (14) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 27.728176ms)
Feb 20 01:04:43.770: INFO: (14) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 27.670051ms)
Feb 20 01:04:43.774: INFO: (15) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 3.609152ms)
Feb 20 01:04:43.775: INFO: (15) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 4.657014ms)
Feb 20 01:04:43.775: INFO: (15) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 4.616371ms)
Feb 20 01:04:43.776: INFO: (15) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 4.957146ms)
Feb 20 01:04:43.776: INFO: (15) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 6.691385ms)
Feb 20 01:04:43.776: INFO: (15) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 5.704189ms)
Feb 20 01:04:43.778: INFO: (15) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 6.562809ms)
Feb 20 01:04:43.779: INFO: (15) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 7.494193ms)
Feb 20 01:04:43.779: INFO: (15) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 8.187172ms)
Feb 20 01:04:43.796: INFO: (15) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 24.705081ms)
Feb 20 01:04:43.796: INFO: (15) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 25.025593ms)
Feb 20 01:04:43.796: INFO: (15) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test (200; 8.662793ms)
Feb 20 01:04:43.807: INFO: (16) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 9.416168ms)
Feb 20 01:04:43.818: INFO: (16) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 19.496287ms)
Feb 20 01:04:43.819: INFO: (16) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 19.909584ms)
Feb 20 01:04:43.818: INFO: (16) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 20.303661ms)
Feb 20 01:04:43.819: INFO: (16) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 19.904819ms)
Feb 20 01:04:43.819: INFO: (16) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 19.91432ms)
Feb 20 01:04:43.819: INFO: (16) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 19.946094ms)
Feb 20 01:04:43.820: INFO: (16) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 20.885173ms)
Feb 20 01:04:43.820: INFO: (16) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 20.984442ms)
Feb 20 01:04:43.820: INFO: (16) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 21.231295ms)
Feb 20 01:04:43.823: INFO: (16) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: ... (200; 24.391656ms)
Feb 20 01:04:43.824: INFO: (16) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 25.640116ms)
Feb 20 01:04:43.824: INFO: (16) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 25.49691ms)
Feb 20 01:04:43.824: INFO: (16) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 25.377616ms)
Feb 20 01:04:43.865: INFO: (17) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5/proxy/: test (200; 41.072743ms)
Feb 20 01:04:43.866: INFO: (17) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 41.545962ms)
Feb 20 01:04:43.866: INFO: (17) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 41.442782ms)
Feb 20 01:04:43.866: INFO: (17) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 41.605942ms)
Feb 20 01:04:43.867: INFO: (17) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 42.482905ms)
Feb 20 01:04:43.870: INFO: (17) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 46.127495ms)
Feb 20 01:04:43.871: INFO: (17) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 46.871986ms)
Feb 20 01:04:43.871: INFO: (17) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 46.855548ms)
Feb 20 01:04:43.872: INFO: (17) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: ... (200; 51.517836ms)
Feb 20 01:04:43.876: INFO: (17) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 51.39242ms)
Feb 20 01:04:43.876: INFO: (17) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname1/proxy/: foo (200; 51.748312ms)
Feb 20 01:04:43.881: INFO: (18) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 4.465136ms)
Feb 20 01:04:43.882: INFO: (18) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 5.554272ms)
Feb 20 01:04:43.889: INFO: (18) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 12.362213ms)
Feb 20 01:04:43.889: INFO: (18) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test (200; 17.708948ms)
Feb 20 01:04:43.895: INFO: (18) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 18.175681ms)
Feb 20 01:04:43.895: INFO: (18) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 17.819977ms)
Feb 20 01:04:43.895: INFO: (18) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname2/proxy/: tls qux (200; 17.825256ms)
Feb 20 01:04:43.895: INFO: (18) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname2/proxy/: bar (200; 18.15052ms)
Feb 20 01:04:43.895: INFO: (18) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 18.280695ms)
Feb 20 01:04:43.895: INFO: (18) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:160/proxy/: foo (200; 18.286621ms)
Feb 20 01:04:43.895: INFO: (18) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:460/proxy/: tls baz (200; 17.981571ms)
Feb 20 01:04:43.895: INFO: (18) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 17.92752ms)
Feb 20 01:04:43.895: INFO: (18) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 18.114953ms)
Feb 20 01:04:43.897: INFO: (18) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 19.61286ms)
Feb 20 01:04:43.906: INFO: (19) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:462/proxy/: tls qux (200; 9.164038ms)
Feb 20 01:04:43.906: INFO: (19) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 9.028425ms)
Feb 20 01:04:43.906: INFO: (19) /api/v1/namespaces/proxy-1451/services/proxy-service-zj2wc:portname1/proxy/: foo (200; 9.197674ms)
Feb 20 01:04:43.906: INFO: (19) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:162/proxy/: bar (200; 9.14245ms)
Feb 20 01:04:43.906: INFO: (19) /api/v1/namespaces/proxy-1451/pods/https:proxy-service-zj2wc-4fzr5:443/proxy/: test (200; 14.035976ms)
Feb 20 01:04:43.912: INFO: (19) /api/v1/namespaces/proxy-1451/pods/http:proxy-service-zj2wc-4fzr5:1080/proxy/: ... (200; 14.195527ms)
Feb 20 01:04:43.912: INFO: (19) /api/v1/namespaces/proxy-1451/services/http:proxy-service-zj2wc:portname2/proxy/: bar (200; 14.586856ms)
Feb 20 01:04:43.912: INFO: (19) /api/v1/namespaces/proxy-1451/pods/proxy-service-zj2wc-4fzr5:1080/proxy/: test<... (200; 14.672225ms)
Feb 20 01:04:43.914: INFO: (19) /api/v1/namespaces/proxy-1451/services/https:proxy-service-zj2wc:tlsportname1/proxy/: tls baz (200; 16.732012ms)
STEP: deleting ReplicationController proxy-service-zj2wc in namespace proxy-1451, will wait for the garbage collector to delete the pods
Feb 20 01:04:43.980: INFO: Deleting ReplicationController proxy-service-zj2wc took: 10.728922ms
Feb 20 01:04:44.281: INFO: Terminating ReplicationController proxy-service-zj2wc pods took: 300.727321ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:04:48.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1451" for this suite.

• [SLOW TEST:17.840 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":280,"completed":218,"skipped":3647,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:04:48.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Feb 20 01:04:49.101: INFO: Waiting up to 5m0s for pod "client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb" in namespace "containers-9150" to be "success or failure"
Feb 20 01:04:49.105: INFO: Pod "client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.8088ms
Feb 20 01:04:51.115: INFO: Pod "client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014421315s
Feb 20 01:04:53.123: INFO: Pod "client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022459511s
Feb 20 01:04:55.128: INFO: Pod "client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027630394s
Feb 20 01:04:57.136: INFO: Pod "client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035195626s
STEP: Saw pod success
Feb 20 01:04:57.136: INFO: Pod "client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb" satisfied condition "success or failure"
Feb 20 01:04:57.140: INFO: Trying to get logs from node jerma-node pod client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb container test-container: 
STEP: delete the pod
Feb 20 01:04:57.258: INFO: Waiting for pod client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb to disappear
Feb 20 01:04:57.272: INFO: Pod client-containers-239f3e67-9779-4f09-b038-7feae7fbcfeb no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:04:57.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9150" for this suite.

• [SLOW TEST:8.310 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":219,"skipped":3663,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:04:57.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 20 01:05:08.187: INFO: Successfully updated pod "labelsupdate73e70372-e3a6-4ee0-b7c3-b9fbd229bcdf"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:05:10.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1653" for this suite.

• [SLOW TEST:12.942 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":220,"skipped":3681,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:05:10.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1544, will wait for the garbage collector to delete the pods
Feb 20 01:05:22.471: INFO: Deleting Job.batch foo took: 26.137664ms
Feb 20 01:05:22.772: INFO: Terminating Job.batch foo pods took: 301.200099ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:06:12.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1544" for this suite.

• [SLOW TEST:62.245 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":221,"skipped":3693,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:06:12.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's args
Feb 20 01:06:12.706: INFO: Waiting up to 5m0s for pod "var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114" in namespace "var-expansion-7945" to be "success or failure"
Feb 20 01:06:12.751: INFO: Pod "var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114": Phase="Pending", Reason="", readiness=false. Elapsed: 45.047943ms
Feb 20 01:06:14.763: INFO: Pod "var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05703773s
Feb 20 01:06:16.770: INFO: Pod "var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064199172s
Feb 20 01:06:18.778: INFO: Pod "var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072289525s
Feb 20 01:06:20.791: INFO: Pod "var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085042766s
Feb 20 01:06:22.885: INFO: Pod "var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.179227681s
STEP: Saw pod success
Feb 20 01:06:22.886: INFO: Pod "var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114" satisfied condition "success or failure"
Feb 20 01:06:22.902: INFO: Trying to get logs from node jerma-node pod var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114 container dapi-container: 
STEP: delete the pod
Feb 20 01:06:23.507: INFO: Waiting for pod var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114 to disappear
Feb 20 01:06:23.515: INFO: Pod var-expansion-158cf6ed-a60a-449d-884d-6ecb500f6114 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:06:23.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7945" for this suite.

• [SLOW TEST:11.035 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":222,"skipped":3710,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:06:23.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 01:06:24.800: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 01:06:26.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:06:28.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:06:30.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757584, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 01:06:33.922: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:06:33.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3781" for this suite.
STEP: Destroying namespace "webhook-3781-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.549 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":223,"skipped":3711,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:06:34.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 01:06:34.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7" in namespace "downward-api-8269" to be "success or failure"
Feb 20 01:06:34.218: INFO: Pod "downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.204431ms
Feb 20 01:06:36.228: INFO: Pod "downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033511564s
Feb 20 01:06:38.236: INFO: Pod "downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04190629s
Feb 20 01:06:40.248: INFO: Pod "downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053615753s
Feb 20 01:06:42.254: INFO: Pod "downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060241679s
Feb 20 01:06:44.259: INFO: Pod "downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065459379s
Feb 20 01:06:46.267: INFO: Pod "downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.073216568s
STEP: Saw pod success
Feb 20 01:06:46.267: INFO: Pod "downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7" satisfied condition "success or failure"
Feb 20 01:06:46.272: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7 container client-container: 
STEP: delete the pod
Feb 20 01:06:46.350: INFO: Waiting for pod downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7 to disappear
Feb 20 01:06:46.358: INFO: Pod downwardapi-volume-2f4cbcdd-8775-4eda-aafd-0f93b097f6e7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:06:46.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8269" for this suite.

• [SLOW TEST:12.290 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":224,"skipped":3716,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:06:46.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Feb 20 01:06:46.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 20 01:06:48.875: INFO: stderr: ""
Feb 20 01:06:48.875: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:06:48.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5702" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":280,"completed":225,"skipped":3717,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:06:48.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-6433a14b-7b7a-4036-bd62-72089d03038b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:06:59.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4545" for this suite.

• [SLOW TEST:10.239 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":226,"skipped":3725,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:06:59.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-389c2344-7e06-4d7e-88d6-332519d63967
STEP: Creating a pod to test consume secrets
Feb 20 01:06:59.284: INFO: Waiting up to 5m0s for pod "pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65" in namespace "secrets-1260" to be "success or failure"
Feb 20 01:06:59.353: INFO: Pod "pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65": Phase="Pending", Reason="", readiness=false. Elapsed: 69.201509ms
Feb 20 01:07:01.409: INFO: Pod "pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124957649s
Feb 20 01:07:03.474: INFO: Pod "pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190441197s
Feb 20 01:07:05.483: INFO: Pod "pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199034663s
Feb 20 01:07:07.493: INFO: Pod "pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208909041s
Feb 20 01:07:09.502: INFO: Pod "pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.218621258s
STEP: Saw pod success
Feb 20 01:07:09.503: INFO: Pod "pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65" satisfied condition "success or failure"
Feb 20 01:07:09.531: INFO: Trying to get logs from node jerma-node pod pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65 container secret-volume-test: 
STEP: delete the pod
Feb 20 01:07:09.614: INFO: Waiting for pod pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65 to disappear
Feb 20 01:07:09.626: INFO: Pod pod-secrets-7311f0a2-4576-41e6-91c1-be5725636b65 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:07:09.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1260" for this suite.

• [SLOW TEST:10.513 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":227,"skipped":3749,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:07:09.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-cbe8518d-5b9b-447c-b57e-297fc48e9b6f
STEP: Creating a pod to test consume secrets
Feb 20 01:07:09.929: INFO: Waiting up to 5m0s for pod "pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65" in namespace "secrets-5435" to be "success or failure"
Feb 20 01:07:09.934: INFO: Pod "pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.566447ms
Feb 20 01:07:11.940: INFO: Pod "pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01043633s
Feb 20 01:07:13.946: INFO: Pod "pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016348449s
Feb 20 01:07:15.954: INFO: Pod "pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024065772s
Feb 20 01:07:17.964: INFO: Pod "pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034513938s
STEP: Saw pod success
Feb 20 01:07:17.965: INFO: Pod "pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65" satisfied condition "success or failure"
Feb 20 01:07:17.968: INFO: Trying to get logs from node jerma-node pod pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65 container secret-volume-test: 
STEP: delete the pod
Feb 20 01:07:18.029: INFO: Waiting for pod pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65 to disappear
Feb 20 01:07:18.045: INFO: Pod pod-secrets-3295726d-2669-41e6-9f63-3fd103c2da65 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:07:18.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5435" for this suite.

• [SLOW TEST:8.479 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":228,"skipped":3750,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:07:18.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-034398f5-5998-4e80-a515-277ccc04c050
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-034398f5-5998-4e80-a515-277ccc04c050
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:07:28.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9840" for this suite.

• [SLOW TEST:10.327 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":229,"skipped":3777,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:07:28.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:07:28.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1962" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":230,"skipped":3781,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:07:28.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:07:35.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2528" for this suite.
STEP: Destroying namespace "nsdeletetest-2797" for this suite.
Feb 20 01:07:35.983: INFO: Namespace nsdeletetest-2797 was already deleted
STEP: Destroying namespace "nsdeletetest-6933" for this suite.

• [SLOW TEST:7.399 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":231,"skipped":3789,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:07:35.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Feb 20 01:07:36.122: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Feb 20 01:07:36.922: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 20 01:07:39.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757657, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:07:41.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757657, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:07:43.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757657, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:07:45.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757657, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:07:48.127: INFO: Waited 923.019175ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:07:48.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2084" for this suite.

• [SLOW TEST:12.807 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":232,"skipped":3789,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:07:48.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:07:59.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7506" for this suite.

• [SLOW TEST:10.311 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":233,"skipped":3811,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:07:59.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 01:08:00.076: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 01:08:02.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:08:04.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:08:06.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717757680, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 01:08:09.133: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: creating a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:08:09.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3212" for this suite.
STEP: Destroying namespace "webhook-3212-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.355 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":234,"skipped":3825,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:08:09.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 20 01:08:09.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4650'
Feb 20 01:08:10.024: INFO: stderr: ""
Feb 20 01:08:10.024: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 20 01:08:10.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:10.236: INFO: stderr: ""
Feb 20 01:08:10.236: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-vrlzp "
Feb 20 01:08:10.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:10.390: INFO: stderr: ""
Feb 20 01:08:10.390: INFO: stdout: ""
Feb 20 01:08:10.391: INFO: update-demo-nautilus-4g2rb is created but not running
Feb 20 01:08:15.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:15.574: INFO: stderr: ""
Feb 20 01:08:15.574: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-vrlzp "
Feb 20 01:08:15.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:15.696: INFO: stderr: ""
Feb 20 01:08:15.697: INFO: stdout: ""
Feb 20 01:08:15.697: INFO: update-demo-nautilus-4g2rb is created but not running
Feb 20 01:08:20.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:20.936: INFO: stderr: ""
Feb 20 01:08:20.937: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-vrlzp "
Feb 20 01:08:20.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:21.044: INFO: stderr: ""
Feb 20 01:08:21.045: INFO: stdout: ""
Feb 20 01:08:21.045: INFO: update-demo-nautilus-4g2rb is created but not running
Feb 20 01:08:26.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:26.189: INFO: stderr: ""
Feb 20 01:08:26.189: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-vrlzp "
Feb 20 01:08:26.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:26.355: INFO: stderr: ""
Feb 20 01:08:26.355: INFO: stdout: "true"
Feb 20 01:08:26.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:26.611: INFO: stderr: ""
Feb 20 01:08:26.611: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 20 01:08:26.611: INFO: validating pod update-demo-nautilus-4g2rb
Feb 20 01:08:26.641: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 20 01:08:26.642: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 20 01:08:26.642: INFO: update-demo-nautilus-4g2rb is verified up and running
Feb 20 01:08:26.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vrlzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:26.748: INFO: stderr: ""
Feb 20 01:08:26.749: INFO: stdout: "true"
Feb 20 01:08:26.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vrlzp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:26.834: INFO: stderr: ""
Feb 20 01:08:26.834: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 20 01:08:26.834: INFO: validating pod update-demo-nautilus-vrlzp
Feb 20 01:08:26.899: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 20 01:08:26.899: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 20 01:08:26.900: INFO: update-demo-nautilus-vrlzp is verified up and running
STEP: scaling down the replication controller
Feb 20 01:08:26.902: INFO: scanned /root for discovery docs: 
Feb 20 01:08:26.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4650'
Feb 20 01:08:28.133: INFO: stderr: ""
Feb 20 01:08:28.133: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 20 01:08:28.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:28.280: INFO: stderr: ""
Feb 20 01:08:28.280: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-vrlzp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 20 01:08:33.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:33.420: INFO: stderr: ""
Feb 20 01:08:33.420: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-vrlzp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 20 01:08:38.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:38.622: INFO: stderr: ""
Feb 20 01:08:38.623: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-vrlzp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 20 01:08:43.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:43.791: INFO: stderr: ""
Feb 20 01:08:43.791: INFO: stdout: "update-demo-nautilus-4g2rb "
Feb 20 01:08:43.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:43.912: INFO: stderr: ""
Feb 20 01:08:43.913: INFO: stdout: "true"
Feb 20 01:08:43.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:44.051: INFO: stderr: ""
Feb 20 01:08:44.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 20 01:08:44.051: INFO: validating pod update-demo-nautilus-4g2rb
Feb 20 01:08:44.056: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 20 01:08:44.056: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 20 01:08:44.056: INFO: update-demo-nautilus-4g2rb is verified up and running
STEP: scaling up the replication controller
Feb 20 01:08:44.061: INFO: scanned /root for discovery docs: 
Feb 20 01:08:44.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4650'
Feb 20 01:08:45.229: INFO: stderr: ""
Feb 20 01:08:45.229: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 20 01:08:45.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:45.325: INFO: stderr: ""
Feb 20 01:08:45.325: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-fpgwr "
Feb 20 01:08:45.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:45.424: INFO: stderr: ""
Feb 20 01:08:45.424: INFO: stdout: "true"
Feb 20 01:08:45.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:45.507: INFO: stderr: ""
Feb 20 01:08:45.507: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 20 01:08:45.507: INFO: validating pod update-demo-nautilus-4g2rb
Feb 20 01:08:45.511: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 20 01:08:45.511: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 20 01:08:45.511: INFO: update-demo-nautilus-4g2rb is verified up and running
Feb 20 01:08:45.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpgwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:45.593: INFO: stderr: ""
Feb 20 01:08:45.593: INFO: stdout: ""
Feb 20 01:08:45.593: INFO: update-demo-nautilus-fpgwr is created but not running
Feb 20 01:08:50.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:50.717: INFO: stderr: ""
Feb 20 01:08:50.717: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-fpgwr "
Feb 20 01:08:50.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:50.834: INFO: stderr: ""
Feb 20 01:08:50.834: INFO: stdout: "true"
Feb 20 01:08:50.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:50.937: INFO: stderr: ""
Feb 20 01:08:50.937: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 20 01:08:50.937: INFO: validating pod update-demo-nautilus-4g2rb
Feb 20 01:08:50.942: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 20 01:08:50.942: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 20 01:08:50.942: INFO: update-demo-nautilus-4g2rb is verified up and running
Feb 20 01:08:50.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpgwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:51.036: INFO: stderr: ""
Feb 20 01:08:51.036: INFO: stdout: ""
Feb 20 01:08:51.036: INFO: update-demo-nautilus-fpgwr is created but not running
Feb 20 01:08:56.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4650'
Feb 20 01:08:56.270: INFO: stderr: ""
Feb 20 01:08:56.270: INFO: stdout: "update-demo-nautilus-4g2rb update-demo-nautilus-fpgwr "
Feb 20 01:08:56.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:56.387: INFO: stderr: ""
Feb 20 01:08:56.387: INFO: stdout: "true"
Feb 20 01:08:56.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g2rb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:56.482: INFO: stderr: ""
Feb 20 01:08:56.482: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 20 01:08:56.482: INFO: validating pod update-demo-nautilus-4g2rb
Feb 20 01:08:56.487: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 20 01:08:56.487: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 20 01:08:56.487: INFO: update-demo-nautilus-4g2rb is verified up and running
Feb 20 01:08:56.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpgwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:56.587: INFO: stderr: ""
Feb 20 01:08:56.588: INFO: stdout: "true"
Feb 20 01:08:56.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpgwr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4650'
Feb 20 01:08:56.715: INFO: stderr: ""
Feb 20 01:08:56.716: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 20 01:08:56.716: INFO: validating pod update-demo-nautilus-fpgwr
Feb 20 01:08:56.753: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 20 01:08:56.754: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 20 01:08:56.754: INFO: update-demo-nautilus-fpgwr is verified up and running
STEP: using delete to clean up resources
Feb 20 01:08:56.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4650'
Feb 20 01:08:56.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 20 01:08:56.944: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 20 01:08:56.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4650'
Feb 20 01:08:57.143: INFO: stderr: "No resources found in kubectl-4650 namespace.\n"
Feb 20 01:08:57.143: INFO: stdout: ""
Feb 20 01:08:57.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4650 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 20 01:08:57.294: INFO: stderr: ""
Feb 20 01:08:57.294: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:08:57.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4650" for this suite.

• [SLOW TEST:47.847 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":280,"completed":235,"skipped":3827,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:08:57.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 20 01:08:58.137: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:09:13.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4732" for this suite.

• [SLOW TEST:16.360 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":236,"skipped":3847,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:09:13.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:09:13.771: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:09:14.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8894" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":280,"completed":237,"skipped":3848,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:09:14.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 01:09:15.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998" in namespace "projected-2312" to be "success or failure"
Feb 20 01:09:15.124: INFO: Pod "downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998": Phase="Pending", Reason="", readiness=false. Elapsed: 7.113164ms
Feb 20 01:09:17.130: INFO: Pod "downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012867349s
Feb 20 01:09:19.134: INFO: Pod "downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016669359s
Feb 20 01:09:21.141: INFO: Pod "downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024309705s
Feb 20 01:09:23.148: INFO: Pod "downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030740202s
Feb 20 01:09:25.153: INFO: Pod "downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.036192018s
STEP: Saw pod success
Feb 20 01:09:25.153: INFO: Pod "downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998" satisfied condition "success or failure"
Feb 20 01:09:25.157: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998 container client-container: 
STEP: delete the pod
Feb 20 01:09:25.389: INFO: Waiting for pod downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998 to disappear
Feb 20 01:09:25.395: INFO: Pod downwardapi-volume-c655b7b1-8a5b-4538-ae3a-279c4ff5a998 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:09:25.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2312" for this suite.

• [SLOW TEST:10.515 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":238,"skipped":3860,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:09:25.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 20 01:09:33.388: INFO: 0 pods remaining
Feb 20 01:09:33.388: INFO: 0 pods have nil DeletionTimestamp
Feb 20 01:09:33.388: INFO: 
STEP: Gathering metrics
W0220 01:09:34.495594       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 20 01:09:34.495: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:09:34.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4581" for this suite.

• [SLOW TEST:9.328 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":239,"skipped":3862,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:09:34.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating pod
Feb 20 01:09:55.186: INFO: Pod pod-hostip-a80e8a1e-ba19-4b33-a46d-296469be17a3 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:09:55.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7231" for this suite.

• [SLOW TEST:20.460 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":240,"skipped":3876,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:09:55.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 20 01:09:55.480: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3164 /api/v1/namespaces/watch-3164/configmaps/e2e-watch-test-resource-version 925d9c8f-65d1-4a08-b231-650f7e5c2ac5 9514996 0 2020-02-20 01:09:55 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:09:55.480: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3164 /api/v1/namespaces/watch-3164/configmaps/e2e-watch-test-resource-version 925d9c8f-65d1-4a08-b231-650f7e5c2ac5 9514997 0 2020-02-20 01:09:55 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:09:55.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3164" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":241,"skipped":3905,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:09:55.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:10:12.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4368" for this suite.

• [SLOW TEST:16.534 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":242,"skipped":3919,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:10:12.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 01:10:12.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e" in namespace "downward-api-7744" to be "success or failure"
Feb 20 01:10:12.455: INFO: Pod "downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e": Phase="Pending", Reason="", readiness=false. Elapsed: 75.622391ms
Feb 20 01:10:14.467: INFO: Pod "downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088312047s
Feb 20 01:10:16.629: INFO: Pod "downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250067577s
Feb 20 01:10:18.639: INFO: Pod "downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25971388s
Feb 20 01:10:20.717: INFO: Pod "downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338337579s
Feb 20 01:10:22.728: INFO: Pod "downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.348503182s
STEP: Saw pod success
Feb 20 01:10:22.728: INFO: Pod "downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e" satisfied condition "success or failure"
Feb 20 01:10:22.733: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e container client-container: 
STEP: delete the pod
Feb 20 01:10:22.913: INFO: Waiting for pod downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e to disappear
Feb 20 01:10:22.921: INFO: Pod downwardapi-volume-97a40997-e39a-47ac-9a32-28c7933dde7e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:10:22.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7744" for this suite.

• [SLOW TEST:10.915 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":243,"skipped":3920,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:10:22.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:10:59.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9637" for this suite.

• [SLOW TEST:36.250 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":244,"skipped":3937,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:10:59.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 20 01:11:07.904: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b8010318-019f-47eb-93d2-c2d923b987a8"
Feb 20 01:11:07.905: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b8010318-019f-47eb-93d2-c2d923b987a8" in namespace "pods-7823" to be "terminated due to deadline exceeded"
Feb 20 01:11:07.915: INFO: Pod "pod-update-activedeadlineseconds-b8010318-019f-47eb-93d2-c2d923b987a8": Phase="Running", Reason="", readiness=true. Elapsed: 10.854618ms
Feb 20 01:11:09.925: INFO: Pod "pod-update-activedeadlineseconds-b8010318-019f-47eb-93d2-c2d923b987a8": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02028803s
Feb 20 01:11:09.925: INFO: Pod "pod-update-activedeadlineseconds-b8010318-019f-47eb-93d2-c2d923b987a8" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:11:09.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7823" for this suite.

• [SLOW TEST:10.736 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":245,"skipped":3951,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:11:09.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Feb 20 01:11:10.088: INFO: namespace kubectl-3634
Feb 20 01:11:10.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3634'
Feb 20 01:11:10.527: INFO: stderr: ""
Feb 20 01:11:10.527: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 20 01:11:11.538: INFO: Selector matched 1 pod for map[app:agnhost]
Feb 20 01:11:11.538: INFO: Found 0 / 1
Feb 20 01:11:12.538: INFO: Selector matched 1 pod for map[app:agnhost]
Feb 20 01:11:12.539: INFO: Found 0 / 1
Feb 20 01:11:13.535: INFO: Selector matched 1 pod for map[app:agnhost]
Feb 20 01:11:13.535: INFO: Found 0 / 1
Feb 20 01:11:14.536: INFO: Selector matched 1 pod for map[app:agnhost]
Feb 20 01:11:14.537: INFO: Found 0 / 1
Feb 20 01:11:15.582: INFO: Selector matched 1 pod for map[app:agnhost]
Feb 20 01:11:15.582: INFO: Found 0 / 1
Feb 20 01:11:16.537: INFO: Selector matched 1 pod for map[app:agnhost]
Feb 20 01:11:16.537: INFO: Found 0 / 1
Feb 20 01:11:17.534: INFO: Selector matched 1 pod for map[app:agnhost]
Feb 20 01:11:17.534: INFO: Found 0 / 1
Feb 20 01:11:18.540: INFO: Selector matched 1 pod for map[app:agnhost]
Feb 20 01:11:18.541: INFO: Found 1 / 1
Feb 20 01:11:18.541: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 20 01:11:18.547: INFO: Selector matched 1 pod for map[app:agnhost]
Feb 20 01:11:18.547: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Feb 20 01:11:18.547: INFO: wait on agnhost-master startup in kubectl-3634 
Feb 20 01:11:18.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-zpfdk agnhost-master --namespace=kubectl-3634'
Feb 20 01:11:18.687: INFO: stderr: ""
Feb 20 01:11:18.687: INFO: stdout: "Paused\n"
STEP: exposing RC
Feb 20 01:11:18.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3634'
Feb 20 01:11:18.816: INFO: stderr: ""
Feb 20 01:11:18.816: INFO: stdout: "service/rm2 exposed\n"
Feb 20 01:11:18.825: INFO: Service rm2 in namespace kubectl-3634 found.
STEP: exposing service
Feb 20 01:11:20.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3634'
Feb 20 01:11:21.018: INFO: stderr: ""
Feb 20 01:11:21.018: INFO: stdout: "service/rm3 exposed\n"
Feb 20 01:11:21.038: INFO: Service rm3 in namespace kubectl-3634 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:11:23.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3634" for this suite.

• [SLOW TEST:13.125 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":280,"completed":246,"skipped":3986,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:11:23.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 01:11:23.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58" in namespace "downward-api-3617" to be "success or failure"
Feb 20 01:11:23.211: INFO: Pod "downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58": Phase="Pending", Reason="", readiness=false. Elapsed: 30.67687ms
Feb 20 01:11:25.219: INFO: Pod "downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038397132s
Feb 20 01:11:27.226: INFO: Pod "downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04579466s
Feb 20 01:11:29.258: INFO: Pod "downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07711695s
Feb 20 01:11:31.263: INFO: Pod "downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082871335s
Feb 20 01:11:33.272: INFO: Pod "downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091365951s
STEP: Saw pod success
Feb 20 01:11:33.272: INFO: Pod "downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58" satisfied condition "success or failure"
Feb 20 01:11:33.324: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58 container client-container: 
STEP: delete the pod
Feb 20 01:11:33.451: INFO: Waiting for pod downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58 to disappear
Feb 20 01:11:33.462: INFO: Pod downwardapi-volume-d15e214d-943b-44b6-95ab-14df1f70ef58 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:11:33.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3617" for this suite.

• [SLOW TEST:10.476 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":247,"skipped":3987,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:11:33.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 20 01:11:49.732: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 20 01:11:49.743: INFO: Pod pod-with-poststart-http-hook still exists
Feb 20 01:11:51.744: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 20 01:11:51.752: INFO: Pod pod-with-poststart-http-hook still exists
Feb 20 01:11:53.744: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 20 01:11:53.751: INFO: Pod pod-with-poststart-http-hook still exists
Feb 20 01:11:55.744: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 20 01:11:55.748: INFO: Pod pod-with-poststart-http-hook still exists
Feb 20 01:11:57.743: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 20 01:11:57.783: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:11:57.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-455" for this suite.

• [SLOW TEST:24.260 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":248,"skipped":4010,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:11:57.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:11:57.965: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 36.003623ms)
Feb 20 01:11:57.972: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 6.779744ms)
Feb 20 01:11:57.978: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 6.398025ms)
Feb 20 01:11:57.987: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 8.840659ms)
Feb 20 01:11:57.993: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.410665ms)
Feb 20 01:11:57.999: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.802035ms)
Feb 20 01:11:58.004: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.354127ms)
Feb 20 01:11:58.009: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.802691ms)
Feb 20 01:11:58.013: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.381856ms)
Feb 20 01:11:58.019: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.805958ms)
Feb 20 01:11:58.023: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.678675ms)
Feb 20 01:11:58.026: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.442922ms)
Feb 20 01:11:58.030: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.982974ms)
Feb 20 01:11:58.034: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.886217ms)
Feb 20 01:11:58.038: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.35695ms)
Feb 20 01:11:58.041: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.899852ms)
Feb 20 01:11:58.046: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.98092ms)
Feb 20 01:11:58.050: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.391892ms)
Feb 20 01:11:58.054: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.54558ms)
Feb 20 01:11:58.057: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.46409ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:11:58.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3656" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":280,"completed":249,"skipped":4035,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:11:58.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 20 01:11:58.205: INFO: Waiting up to 5m0s for pod "downward-api-c152cecb-2175-41ee-b9b8-02d078215c99" in namespace "downward-api-3283" to be "success or failure"
Feb 20 01:11:58.214: INFO: Pod "downward-api-c152cecb-2175-41ee-b9b8-02d078215c99": Phase="Pending", Reason="", readiness=false. Elapsed: 9.28522ms
Feb 20 01:12:00.221: INFO: Pod "downward-api-c152cecb-2175-41ee-b9b8-02d078215c99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016140425s
Feb 20 01:12:02.530: INFO: Pod "downward-api-c152cecb-2175-41ee-b9b8-02d078215c99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324756767s
Feb 20 01:12:04.540: INFO: Pod "downward-api-c152cecb-2175-41ee-b9b8-02d078215c99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335210584s
Feb 20 01:12:06.552: INFO: Pod "downward-api-c152cecb-2175-41ee-b9b8-02d078215c99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346954454s
Feb 20 01:12:08.569: INFO: Pod "downward-api-c152cecb-2175-41ee-b9b8-02d078215c99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.364361972s
STEP: Saw pod success
Feb 20 01:12:08.570: INFO: Pod "downward-api-c152cecb-2175-41ee-b9b8-02d078215c99" satisfied condition "success or failure"
Feb 20 01:12:08.573: INFO: Trying to get logs from node jerma-node pod downward-api-c152cecb-2175-41ee-b9b8-02d078215c99 container dapi-container: 
STEP: delete the pod
Feb 20 01:12:08.619: INFO: Waiting for pod downward-api-c152cecb-2175-41ee-b9b8-02d078215c99 to disappear
Feb 20 01:12:08.633: INFO: Pod downward-api-c152cecb-2175-41ee-b9b8-02d078215c99 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:12:08.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3283" for this suite.

• [SLOW TEST:10.667 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":250,"skipped":4035,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:12:08.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 20 01:12:08.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4" in namespace "downward-api-6172" to be "success or failure"
Feb 20 01:12:09.183: INFO: Pod "downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 203.636212ms
Feb 20 01:12:11.198: INFO: Pod "downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218081306s
Feb 20 01:12:13.206: INFO: Pod "downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226511282s
Feb 20 01:12:15.214: INFO: Pod "downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23437761s
Feb 20 01:12:17.219: INFO: Pod "downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.239498692s
STEP: Saw pod success
Feb 20 01:12:17.219: INFO: Pod "downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4" satisfied condition "success or failure"
Feb 20 01:12:17.222: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4 container client-container: 
STEP: delete the pod
Feb 20 01:12:17.276: INFO: Waiting for pod downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4 to disappear
Feb 20 01:12:17.283: INFO: Pod downwardapi-volume-38b8af0e-5e8d-440a-b2dc-ed0f97cc5cb4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:12:17.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6172" for this suite.

• [SLOW TEST:8.558 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":251,"skipped":4045,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:12:17.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 20 01:12:17.444: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 20 01:12:17.567: INFO: Waiting for terminating namespaces to be deleted...
Feb 20 01:12:17.571: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 20 01:12:17.580: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 20 01:12:17.580: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 20 01:12:17.580: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 20 01:12:17.580: INFO: 	Container weave ready: true, restart count 1
Feb 20 01:12:17.580: INFO: 	Container weave-npc ready: true, restart count 0
Feb 20 01:12:17.580: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 20 01:12:17.616: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 20 01:12:17.616: INFO: 	Container coredns ready: true, restart count 0
Feb 20 01:12:17.616: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 20 01:12:17.616: INFO: 	Container coredns ready: true, restart count 0
Feb 20 01:12:17.616: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 20 01:12:17.616: INFO: 	Container kube-controller-manager ready: true, restart count 14
Feb 20 01:12:17.616: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 20 01:12:17.616: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 20 01:12:17.616: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 20 01:12:17.616: INFO: 	Container weave ready: true, restart count 0
Feb 20 01:12:17.616: INFO: 	Container weave-npc ready: true, restart count 0
Feb 20 01:12:17.616: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 20 01:12:17.616: INFO: 	Container kube-scheduler ready: true, restart count 18
Feb 20 01:12:17.616: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 20 01:12:17.616: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 20 01:12:17.616: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 20 01:12:17.616: INFO: 	Container etcd ready: true, restart count 1
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-cc934d31-cdfa-4874-bffe-b1c88117f0fb 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-cc934d31-cdfa-4874-bffe-b1c88117f0fb off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-cc934d31-cdfa-4874-bffe-b1c88117f0fb
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:17:34.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3772" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:316.857 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":252,"skipped":4059,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:17:34.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:17:34.329: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2a097144-c266-4091-9f20-87781b4fe991" in namespace "security-context-test-5951" to be "success or failure"
Feb 20 01:17:34.350: INFO: Pod "alpine-nnp-false-2a097144-c266-4091-9f20-87781b4fe991": Phase="Pending", Reason="", readiness=false. Elapsed: 21.382511ms
Feb 20 01:17:36.375: INFO: Pod "alpine-nnp-false-2a097144-c266-4091-9f20-87781b4fe991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04655556s
Feb 20 01:17:38.380: INFO: Pod "alpine-nnp-false-2a097144-c266-4091-9f20-87781b4fe991": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05141855s
Feb 20 01:17:40.391: INFO: Pod "alpine-nnp-false-2a097144-c266-4091-9f20-87781b4fe991": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062460952s
Feb 20 01:17:42.397: INFO: Pod "alpine-nnp-false-2a097144-c266-4091-9f20-87781b4fe991": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068555319s
Feb 20 01:17:44.681: INFO: Pod "alpine-nnp-false-2a097144-c266-4091-9f20-87781b4fe991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.352613537s
Feb 20 01:17:44.682: INFO: Pod "alpine-nnp-false-2a097144-c266-4091-9f20-87781b4fe991" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:17:44.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5951" for this suite.

• [SLOW TEST:10.706 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":253,"skipped":4073,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:17:44.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 20 01:17:53.619: INFO: Successfully updated pod "annotationupdate0ea21d8f-6b09-400e-b3c5-879e09a43522"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:17:55.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6063" for this suite.

• [SLOW TEST:10.812 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":254,"skipped":4076,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:17:55.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:17:55.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Feb 20 01:17:58.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2431 create -f -'
Feb 20 01:18:00.895: INFO: stderr: ""
Feb 20 01:18:00.895: INFO: stdout: "e2e-test-crd-publish-openapi-6212-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 20 01:18:00.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2431 delete e2e-test-crd-publish-openapi-6212-crds test-cr'
Feb 20 01:18:01.043: INFO: stderr: ""
Feb 20 01:18:01.044: INFO: stdout: "e2e-test-crd-publish-openapi-6212-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb 20 01:18:01.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2431 apply -f -'
Feb 20 01:18:01.412: INFO: stderr: ""
Feb 20 01:18:01.412: INFO: stdout: "e2e-test-crd-publish-openapi-6212-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 20 01:18:01.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2431 delete e2e-test-crd-publish-openapi-6212-crds test-cr'
Feb 20 01:18:01.549: INFO: stderr: ""
Feb 20 01:18:01.549: INFO: stdout: "e2e-test-crd-publish-openapi-6212-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb 20 01:18:01.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6212-crds'
Feb 20 01:18:02.116: INFO: stderr: ""
Feb 20 01:18:02.116: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6212-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:18:04.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2431" for this suite.

• [SLOW TEST:9.303 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":255,"skipped":4080,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:18:04.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 20 01:18:05.120: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-796 /api/v1/namespaces/watch-796/configmaps/e2e-watch-test-label-changed ae89f7b7-95cf-430c-9017-7dc11502e169 9516610 0 2020-02-20 01:18:05 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:18:05.120: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-796 /api/v1/namespaces/watch-796/configmaps/e2e-watch-test-label-changed ae89f7b7-95cf-430c-9017-7dc11502e169 9516611 0 2020-02-20 01:18:05 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:18:05.120: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-796 /api/v1/namespaces/watch-796/configmaps/e2e-watch-test-label-changed ae89f7b7-95cf-430c-9017-7dc11502e169 9516612 0 2020-02-20 01:18:05 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 20 01:18:15.167: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-796 /api/v1/namespaces/watch-796/configmaps/e2e-watch-test-label-changed ae89f7b7-95cf-430c-9017-7dc11502e169 9516644 0 2020-02-20 01:18:05 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:18:15.168: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-796 /api/v1/namespaces/watch-796/configmaps/e2e-watch-test-label-changed ae89f7b7-95cf-430c-9017-7dc11502e169 9516645 0 2020-02-20 01:18:05 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 20 01:18:15.168: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-796 /api/v1/namespaces/watch-796/configmaps/e2e-watch-test-label-changed ae89f7b7-95cf-430c-9017-7dc11502e169 9516646 0 2020-02-20 01:18:05 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:18:15.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-796" for this suite.

• [SLOW TEST:10.233 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":256,"skipped":4104,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:18:15.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-rcw8
STEP: Creating a pod to test atomic-volume-subpath
Feb 20 01:18:16.099: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rcw8" in namespace "subpath-9041" to be "success or failure"
Feb 20 01:18:16.126: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.880495ms
Feb 20 01:18:18.144: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044770213s
Feb 20 01:18:20.150: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051215702s
Feb 20 01:18:22.158: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059153165s
Feb 20 01:18:24.174: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 8.075262512s
Feb 20 01:18:26.183: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 10.084214917s
Feb 20 01:18:28.190: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 12.091396825s
Feb 20 01:18:30.196: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 14.096814789s
Feb 20 01:18:32.204: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 16.10441274s
Feb 20 01:18:34.209: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 18.110243763s
Feb 20 01:18:36.216: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 20.117031393s
Feb 20 01:18:38.224: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 22.124749622s
Feb 20 01:18:40.232: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 24.13306889s
Feb 20 01:18:42.237: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 26.138195073s
Feb 20 01:18:44.246: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Running", Reason="", readiness=true. Elapsed: 28.146986535s
Feb 20 01:18:46.253: INFO: Pod "pod-subpath-test-configmap-rcw8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.153639941s
STEP: Saw pod success
Feb 20 01:18:46.253: INFO: Pod "pod-subpath-test-configmap-rcw8" satisfied condition "success or failure"
Feb 20 01:18:46.257: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-rcw8 container test-container-subpath-configmap-rcw8: 
STEP: delete the pod
Feb 20 01:18:46.309: INFO: Waiting for pod pod-subpath-test-configmap-rcw8 to disappear
Feb 20 01:18:46.325: INFO: Pod pod-subpath-test-configmap-rcw8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rcw8
Feb 20 01:18:46.325: INFO: Deleting pod "pod-subpath-test-configmap-rcw8" in namespace "subpath-9041"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:18:46.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9041" for this suite.

• [SLOW TEST:31.174 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":257,"skipped":4104,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:18:46.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-9c11aab8-bb3d-4708-9089-055904ad0e95
STEP: Creating a pod to test consume secrets
Feb 20 01:18:46.584: INFO: Waiting up to 5m0s for pod "pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4" in namespace "secrets-5496" to be "success or failure"
Feb 20 01:18:46.609: INFO: Pod "pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 23.667794ms
Feb 20 01:18:48.626: INFO: Pod "pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040815798s
Feb 20 01:18:50.631: INFO: Pod "pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046308598s
Feb 20 01:18:52.638: INFO: Pod "pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052865468s
Feb 20 01:18:54.644: INFO: Pod "pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05867877s
STEP: Saw pod success
Feb 20 01:18:54.644: INFO: Pod "pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4" satisfied condition "success or failure"
Feb 20 01:18:54.648: INFO: Trying to get logs from node jerma-node pod pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4 container secret-volume-test: 
STEP: delete the pod
Feb 20 01:18:55.303: INFO: Waiting for pod pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4 to disappear
Feb 20 01:18:55.372: INFO: Pod pod-secrets-98a3dbf8-0103-4006-9026-524fb8a8fdf4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:18:55.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5496" for this suite.

• [SLOW TEST:8.998 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":258,"skipped":4154,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:18:55.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:18:55.581: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ae4074a9-6e29-48af-b807-d2c47cfae2c2" in namespace "security-context-test-2356" to be "success or failure"
Feb 20 01:18:55.623: INFO: Pod "busybox-readonly-false-ae4074a9-6e29-48af-b807-d2c47cfae2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 41.132936ms
Feb 20 01:18:57.631: INFO: Pod "busybox-readonly-false-ae4074a9-6e29-48af-b807-d2c47cfae2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049855748s
Feb 20 01:18:59.637: INFO: Pod "busybox-readonly-false-ae4074a9-6e29-48af-b807-d2c47cfae2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055930262s
Feb 20 01:19:01.643: INFO: Pod "busybox-readonly-false-ae4074a9-6e29-48af-b807-d2c47cfae2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061900897s
Feb 20 01:19:03.653: INFO: Pod "busybox-readonly-false-ae4074a9-6e29-48af-b807-d2c47cfae2c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071397481s
Feb 20 01:19:03.653: INFO: Pod "busybox-readonly-false-ae4074a9-6e29-48af-b807-d2c47cfae2c2" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:19:03.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2356" for this suite.

• [SLOW TEST:8.286 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":259,"skipped":4164,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:19:03.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 20 01:19:03.785: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 20 01:19:03.800: INFO: Waiting for terminating namespaces to be deleted...
Feb 20 01:19:03.803: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 20 01:19:03.818: INFO: busybox-readonly-false-ae4074a9-6e29-48af-b807-d2c47cfae2c2 from security-context-test-2356 started at 2020-02-20 01:18:55 +0000 UTC (1 container status recorded)
Feb 20 01:19:03.818: INFO: 	Container busybox-readonly-false-ae4074a9-6e29-48af-b807-d2c47cfae2c2 ready: false, restart count 0
Feb 20 01:19:03.818: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 20 01:19:03.818: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 20 01:19:03.818: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 20 01:19:03.818: INFO: 	Container weave ready: true, restart count 1
Feb 20 01:19:03.818: INFO: 	Container weave-npc ready: true, restart count 0
Feb 20 01:19:03.818: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 20 01:19:03.838: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 20 01:19:03.838: INFO: 	Container kube-scheduler ready: true, restart count 18
Feb 20 01:19:03.838: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 20 01:19:03.838: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 20 01:19:03.838: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 20 01:19:03.838: INFO: 	Container etcd ready: true, restart count 1
Feb 20 01:19:03.838: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 20 01:19:03.838: INFO: 	Container coredns ready: true, restart count 0
Feb 20 01:19:03.838: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 20 01:19:03.838: INFO: 	Container coredns ready: true, restart count 0
Feb 20 01:19:03.838: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 20 01:19:03.838: INFO: 	Container kube-controller-manager ready: true, restart count 14
Feb 20 01:19:03.838: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 20 01:19:03.838: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 20 01:19:03.838: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 20 01:19:03.838: INFO: 	Container weave ready: true, restart count 0
Feb 20 01:19:03.838: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Feb 20 01:19:04.065: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 20 01:19:04.065: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 20 01:19:04.065: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 20 01:19:04.065: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Feb 20 01:19:04.065: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Feb 20 01:19:04.065: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 20 01:19:04.065: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Feb 20 01:19:04.065: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 20 01:19:04.065: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Feb 20 01:19:04.065: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Feb 20 01:19:04.065: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Feb 20 01:19:04.073: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7180b435-33ed-4452-a47e-05bb106dda4e.15f4f78ab029bd5c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5132/filler-pod-7180b435-33ed-4452-a47e-05bb106dda4e to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7180b435-33ed-4452-a47e-05bb106dda4e.15f4f78bef0ef042], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7180b435-33ed-4452-a47e-05bb106dda4e.15f4f78cd1e250e0], Reason = [Created], Message = [Created container filler-pod-7180b435-33ed-4452-a47e-05bb106dda4e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7180b435-33ed-4452-a47e-05bb106dda4e.15f4f78cf67404a0], Reason = [Started], Message = [Started container filler-pod-7180b435-33ed-4452-a47e-05bb106dda4e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a6d41865-acef-4918-b4a4-cb5e74af7b6b.15f4f78ab4301e8e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5132/filler-pod-a6d41865-acef-4918-b4a4-cb5e74af7b6b to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a6d41865-acef-4918-b4a4-cb5e74af7b6b.15f4f78c0575e00f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a6d41865-acef-4918-b4a4-cb5e74af7b6b.15f4f78d0aff8a0a], Reason = [Created], Message = [Created container filler-pod-a6d41865-acef-4918-b4a4-cb5e74af7b6b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a6d41865-acef-4918-b4a4-cb5e74af7b6b.15f4f78d22b76b55], Reason = [Started], Message = [Started container filler-pod-a6d41865-acef-4918-b4a4-cb5e74af7b6b]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f4f78d83e765c3], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f4f78d86fa0ca3], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:19:17.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5132" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:13.786 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":280,"completed":260,"skipped":4178,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:19:17.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 20 01:19:36.707: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 20 01:19:36.731: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 20 01:19:38.732: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 20 01:19:38.742: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 20 01:19:40.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 20 01:19:40.739: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 20 01:19:42.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 20 01:19:42.736: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 20 01:19:44.732: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 20 01:19:44.746: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 20 01:19:46.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 20 01:19:46.738: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 20 01:19:48.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 20 01:19:48.738: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 20 01:19:50.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 20 01:19:50.738: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 20 01:19:52.732: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 20 01:19:52.741: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:19:52.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8101" for this suite.

• [SLOW TEST:35.291 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":261,"skipped":4185,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:19:52.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 20 01:19:53.493: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 20 01:19:55.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:19:57.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 20 01:19:59.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717758393, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 20 01:20:02.708: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:20:02.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4632" for this suite.
STEP: Destroying namespace "webhook-4632-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.374 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":262,"skipped":4205,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:20:03.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:20:03.311: INFO: Create a RollingUpdate DaemonSet
Feb 20 01:20:03.317: INFO: Check that daemon pods launch on every node of the cluster
Feb 20 01:20:03.397: INFO: Number of nodes with available pods: 0
Feb 20 01:20:03.397: INFO: Node jerma-node is running more than one daemon pod
Feb 20 01:20:05.443: INFO: Number of nodes with available pods: 0
Feb 20 01:20:05.443: INFO: Node jerma-node is running more than one daemon pod
Feb 20 01:20:06.411: INFO: Number of nodes with available pods: 0
Feb 20 01:20:06.411: INFO: Node jerma-node is running more than one daemon pod
Feb 20 01:20:07.410: INFO: Number of nodes with available pods: 0
Feb 20 01:20:07.410: INFO: Node jerma-node is running more than one daemon pod
Feb 20 01:20:10.837: INFO: Number of nodes with available pods: 0
Feb 20 01:20:10.837: INFO: Node jerma-node is running more than one daemon pod
Feb 20 01:20:11.618: INFO: Number of nodes with available pods: 0
Feb 20 01:20:11.619: INFO: Node jerma-node is running more than one daemon pod
Feb 20 01:20:12.408: INFO: Number of nodes with available pods: 0
Feb 20 01:20:12.408: INFO: Node jerma-node is running more than one daemon pod
Feb 20 01:20:13.413: INFO: Number of nodes with available pods: 0
Feb 20 01:20:13.413: INFO: Node jerma-node is running more than one daemon pod
Feb 20 01:20:14.415: INFO: Number of nodes with available pods: 1
Feb 20 01:20:14.415: INFO: Node jerma-node is running more than one daemon pod
Feb 20 01:20:15.413: INFO: Number of nodes with available pods: 2
Feb 20 01:20:15.413: INFO: Number of running nodes: 2, number of available pods: 2
Feb 20 01:20:15.413: INFO: Update the DaemonSet to trigger a rollout
Feb 20 01:20:15.424: INFO: Updating DaemonSet daemon-set
Feb 20 01:20:23.459: INFO: Roll back the DaemonSet before rollout is complete
Feb 20 01:20:23.468: INFO: Updating DaemonSet daemon-set
Feb 20 01:20:23.469: INFO: Make sure DaemonSet rollback is complete
Feb 20 01:20:23.479: INFO: Wrong image for pod: daemon-set-qznbf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 20 01:20:23.479: INFO: Pod daemon-set-qznbf is not available
Feb 20 01:20:25.266: INFO: Wrong image for pod: daemon-set-qznbf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 20 01:20:25.266: INFO: Pod daemon-set-qznbf is not available
Feb 20 01:20:26.127: INFO: Pod daemon-set-n7zp9 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2509, will wait for the garbage collector to delete the pods
Feb 20 01:20:26.203: INFO: Deleting DaemonSet.extensions daemon-set took: 10.627157ms
Feb 20 01:20:27.104: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.7995ms
Feb 20 01:20:42.410: INFO: Number of nodes with available pods: 0
Feb 20 01:20:42.410: INFO: Number of running nodes: 0, number of available pods: 0
Feb 20 01:20:42.415: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2509/daemonsets","resourceVersion":"9517284"},"items":null}

Feb 20 01:20:42.418: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2509/pods","resourceVersion":"9517284"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:20:42.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2509" for this suite.

• [SLOW TEST:39.325 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":263,"skipped":4210,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:20:42.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9264
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating stateful set ss in namespace statefulset-9264
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9264
Feb 20 01:20:42.632: INFO: Found 0 stateful pods, waiting for 1
Feb 20 01:20:52.640: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 20 01:20:52.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 01:20:53.126: INFO: stderr: "I0220 01:20:52.822693    3807 log.go:172] (0xc000a41600) (0xc000b585a0) Create stream\nI0220 01:20:52.822932    3807 log.go:172] (0xc000a41600) (0xc000b585a0) Stream added, broadcasting: 1\nI0220 01:20:52.837131    3807 log.go:172] (0xc000a41600) Reply frame received for 1\nI0220 01:20:52.837269    3807 log.go:172] (0xc000a41600) (0xc000b58000) Create stream\nI0220 01:20:52.837308    3807 log.go:172] (0xc000a41600) (0xc000b58000) Stream added, broadcasting: 3\nI0220 01:20:52.843640    3807 log.go:172] (0xc000a41600) Reply frame received for 3\nI0220 01:20:52.843714    3807 log.go:172] (0xc000a41600) (0xc000684640) Create stream\nI0220 01:20:52.844119    3807 log.go:172] (0xc000a41600) (0xc000684640) Stream added, broadcasting: 5\nI0220 01:20:52.851915    3807 log.go:172] (0xc000a41600) Reply frame received for 5\nI0220 01:20:52.956275    3807 log.go:172] (0xc000a41600) Data frame received for 5\nI0220 01:20:52.956316    3807 log.go:172] (0xc000684640) (5) Data frame handling\nI0220 01:20:52.956335    3807 log.go:172] (0xc000684640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 01:20:53.020888    3807 log.go:172] (0xc000a41600) Data frame received for 3\nI0220 01:20:53.020906    3807 log.go:172] (0xc000b58000) (3) Data frame handling\nI0220 01:20:53.020929    3807 log.go:172] (0xc000b58000) (3) Data frame sent\nI0220 01:20:53.115266    3807 log.go:172] (0xc000a41600) Data frame received for 1\nI0220 01:20:53.115395    3807 log.go:172] (0xc000a41600) (0xc000b58000) Stream removed, broadcasting: 3\nI0220 01:20:53.115490    3807 log.go:172] (0xc000b585a0) (1) Data frame handling\nI0220 01:20:53.115534    3807 log.go:172] (0xc000b585a0) (1) Data frame sent\nI0220 01:20:53.115615    3807 log.go:172] (0xc000a41600) (0xc000684640) Stream removed, broadcasting: 5\nI0220 01:20:53.115786    3807 log.go:172] (0xc000a41600) (0xc000b585a0) Stream removed, broadcasting: 1\nI0220 01:20:53.115814    3807 log.go:172] (0xc000a41600) Go away received\nI0220 01:20:53.117027    3807 log.go:172] (0xc000a41600) (0xc000b585a0) Stream removed, broadcasting: 1\nI0220 01:20:53.117051    3807 log.go:172] (0xc000a41600) (0xc000b58000) Stream removed, broadcasting: 3\nI0220 01:20:53.117063    3807 log.go:172] (0xc000a41600) (0xc000684640) Stream removed, broadcasting: 5\n"
Feb 20 01:20:53.126: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 01:20:53.127: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

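For context: the pod's readiness check serves index.html out of the Apache document root, so moving the file to /tmp makes the probe fail and the pod reports Ready=false without ever restarting. A minimal sketch of the same trick run by hand (the namespace and pod name are the ones from this run; the probe details are an assumption about how the suite configures the httpd image):

    # Break readiness: move the served file out of the document root.
    # The trailing "|| true" keeps the exec's exit status at 0 even if the file is already gone.
    kubectl --kubeconfig=/root/.kube/config exec -n statefulset-9264 ss-0 -- \
      /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
    # The pod flips to Ready=false shortly afterwards:
    kubectl -n statefulset-9264 get pod ss-0 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
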
Feb 20 01:20:53.131: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 20 01:21:03.139: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 01:21:03.139: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 01:21:03.201: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 20 01:21:03.201: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  }]
Feb 20 01:21:03.201: INFO: 
Feb 20 01:21:03.201: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 20 01:21:04.746: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.946217455s
Feb 20 01:21:05.757: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.401777475s
Feb 20 01:21:06.895: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.39081797s
Feb 20 01:21:07.904: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.252592788s
Feb 20 01:21:09.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.24372119s
Feb 20 01:21:10.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.342952465s
Feb 20 01:21:12.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.169948609s
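The scale-up above proceeds while ss-0 is still Ready=false, which is the point of the burst test: with the default OrderedReady pod management the controller would block on ss-0 before creating ss-1 and ss-2. Burst behaviour corresponds to podManagementPolicy: Parallel in the spec (an assumption about this test's manifest; the field and value are standard apps/v1 API surface). A quick check on a live set:

    # Prints "Parallel" for a burst-scaled StatefulSet, "OrderedReady" (the default) otherwise.
    kubectl -n statefulset-9264 get statefulset ss -o jsonpath='{.spec.podManagementPolicy}'
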
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-9264
Feb 20 01:21:13.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 01:21:13.967: INFO: stderr: "I0220 01:21:13.721580    3827 log.go:172] (0xc000a36000) (0xc000447540) Create stream\nI0220 01:21:13.721829    3827 log.go:172] (0xc000a36000) (0xc000447540) Stream added, broadcasting: 1\nI0220 01:21:13.724491    3827 log.go:172] (0xc000a36000) Reply frame received for 1\nI0220 01:21:13.724520    3827 log.go:172] (0xc000a36000) (0xc000988000) Create stream\nI0220 01:21:13.724526    3827 log.go:172] (0xc000a36000) (0xc000988000) Stream added, broadcasting: 3\nI0220 01:21:13.727294    3827 log.go:172] (0xc000a36000) Reply frame received for 3\nI0220 01:21:13.727330    3827 log.go:172] (0xc000a36000) (0xc0006b1c20) Create stream\nI0220 01:21:13.727352    3827 log.go:172] (0xc000a36000) (0xc0006b1c20) Stream added, broadcasting: 5\nI0220 01:21:13.728387    3827 log.go:172] (0xc000a36000) Reply frame received for 5\nI0220 01:21:13.827193    3827 log.go:172] (0xc000a36000) Data frame received for 5\nI0220 01:21:13.827264    3827 log.go:172] (0xc0006b1c20) (5) Data frame handling\nI0220 01:21:13.827284    3827 log.go:172] (0xc0006b1c20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0220 01:21:13.827324    3827 log.go:172] (0xc000a36000) Data frame received for 3\nI0220 01:21:13.827341    3827 log.go:172] (0xc000988000) (3) Data frame handling\nI0220 01:21:13.827352    3827 log.go:172] (0xc000988000) (3) Data frame sent\nI0220 01:21:13.947956    3827 log.go:172] (0xc000a36000) Data frame received for 1\nI0220 01:21:13.948175    3827 log.go:172] (0xc000a36000) (0xc000988000) Stream removed, broadcasting: 3\nI0220 01:21:13.948234    3827 log.go:172] (0xc000447540) (1) Data frame handling\nI0220 01:21:13.948556    3827 log.go:172] (0xc000447540) (1) Data frame sent\nI0220 01:21:13.948572    3827 log.go:172] (0xc000a36000) (0xc0006b1c20) Stream removed, broadcasting: 5\nI0220 01:21:13.948626    3827 log.go:172] (0xc000a36000) (0xc000447540) Stream removed, broadcasting: 1\nI0220 01:21:13.948665    3827 log.go:172] (0xc000a36000) Go away received\nI0220 01:21:13.949607    3827 log.go:172] (0xc000a36000) (0xc000447540) Stream removed, broadcasting: 1\nI0220 01:21:13.949675    3827 log.go:172] (0xc000a36000) (0xc000988000) Stream removed, broadcasting: 3\nI0220 01:21:13.949688    3827 log.go:172] (0xc000a36000) (0xc0006b1c20) Stream removed, broadcasting: 5\n"
Feb 20 01:21:13.967: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 20 01:21:13.967: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 20 01:21:13.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 01:21:14.331: INFO: stderr: "I0220 01:21:14.127879    3847 log.go:172] (0xc000965600) (0xc00095a640) Create stream\nI0220 01:21:14.128123    3847 log.go:172] (0xc000965600) (0xc00095a640) Stream added, broadcasting: 1\nI0220 01:21:14.134451    3847 log.go:172] (0xc000965600) Reply frame received for 1\nI0220 01:21:14.134585    3847 log.go:172] (0xc000965600) (0xc000aa2460) Create stream\nI0220 01:21:14.134601    3847 log.go:172] (0xc000965600) (0xc000aa2460) Stream added, broadcasting: 3\nI0220 01:21:14.136044    3847 log.go:172] (0xc000965600) Reply frame received for 3\nI0220 01:21:14.136080    3847 log.go:172] (0xc000965600) (0xc000ad4140) Create stream\nI0220 01:21:14.136091    3847 log.go:172] (0xc000965600) (0xc000ad4140) Stream added, broadcasting: 5\nI0220 01:21:14.137393    3847 log.go:172] (0xc000965600) Reply frame received for 5\nI0220 01:21:14.198705    3847 log.go:172] (0xc000965600) Data frame received for 5\nI0220 01:21:14.198850    3847 log.go:172] (0xc000ad4140) (5) Data frame handling\nI0220 01:21:14.198892    3847 log.go:172] (0xc000ad4140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0220 01:21:14.218582    3847 log.go:172] (0xc000965600) Data frame received for 5\nI0220 01:21:14.218600    3847 log.go:172] (0xc000ad4140) (5) Data frame handling\nI0220 01:21:14.218608    3847 log.go:172] (0xc000ad4140) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0220 01:21:14.218734    3847 log.go:172] (0xc000965600) Data frame received for 5\nI0220 01:21:14.218802    3847 log.go:172] (0xc000ad4140) (5) Data frame handling\nI0220 01:21:14.218814    3847 log.go:172] (0xc000ad4140) (5) Data frame sent\n+ true\nI0220 01:21:14.218823    3847 log.go:172] (0xc000965600) Data frame received for 3\nI0220 01:21:14.218831    3847 log.go:172] (0xc000aa2460) (3) Data frame handling\nI0220 01:21:14.218840    3847 log.go:172] (0xc000aa2460) (3) Data frame sent\nI0220 01:21:14.318764    3847 log.go:172] (0xc000965600) (0xc000ad4140) Stream removed, broadcasting: 5\nI0220 01:21:14.318893    3847 log.go:172] (0xc000965600) (0xc000aa2460) Stream removed, broadcasting: 3\nI0220 01:21:14.319010    3847 log.go:172] (0xc000965600) Data frame received for 1\nI0220 01:21:14.319074    3847 log.go:172] (0xc00095a640) (1) Data frame handling\nI0220 01:21:14.319104    3847 log.go:172] (0xc00095a640) (1) Data frame sent\nI0220 01:21:14.319351    3847 log.go:172] (0xc000965600) (0xc00095a640) Stream removed, broadcasting: 1\nI0220 01:21:14.319418    3847 log.go:172] (0xc000965600) Go away received\nI0220 01:21:14.320530    3847 log.go:172] (0xc000965600) (0xc00095a640) Stream removed, broadcasting: 1\nI0220 01:21:14.320588    3847 log.go:172] (0xc000965600) (0xc000aa2460) Stream removed, broadcasting: 3\nI0220 01:21:14.320598    3847 log.go:172] (0xc000965600) (0xc000ad4140) Stream removed, broadcasting: 5\n"
Feb 20 01:21:14.332: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 20 01:21:14.332: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 20 01:21:14.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 01:21:14.772: INFO: stderr: "I0220 01:21:14.528558    3865 log.go:172] (0xc000a188f0) (0xc000683e00) Create stream\nI0220 01:21:14.528896    3865 log.go:172] (0xc000a188f0) (0xc000683e00) Stream added, broadcasting: 1\nI0220 01:21:14.537184    3865 log.go:172] (0xc000a188f0) Reply frame received for 1\nI0220 01:21:14.537289    3865 log.go:172] (0xc000a188f0) (0xc0005c6820) Create stream\nI0220 01:21:14.537304    3865 log.go:172] (0xc000a188f0) (0xc0005c6820) Stream added, broadcasting: 3\nI0220 01:21:14.538706    3865 log.go:172] (0xc000a188f0) Reply frame received for 3\nI0220 01:21:14.538761    3865 log.go:172] (0xc000a188f0) (0xc000683ea0) Create stream\nI0220 01:21:14.538805    3865 log.go:172] (0xc000a188f0) (0xc000683ea0) Stream added, broadcasting: 5\nI0220 01:21:14.540289    3865 log.go:172] (0xc000a188f0) Reply frame received for 5\nI0220 01:21:14.641854    3865 log.go:172] (0xc000a188f0) Data frame received for 3\nI0220 01:21:14.641960    3865 log.go:172] (0xc0005c6820) (3) Data frame handling\nI0220 01:21:14.642005    3865 log.go:172] (0xc0005c6820) (3) Data frame sent\nI0220 01:21:14.642173    3865 log.go:172] (0xc000a188f0) Data frame received for 5\nI0220 01:21:14.642219    3865 log.go:172] (0xc000683ea0) (5) Data frame handling\nI0220 01:21:14.642242    3865 log.go:172] (0xc000683ea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0220 01:21:14.754894    3865 log.go:172] (0xc000a188f0) (0xc0005c6820) Stream removed, broadcasting: 3\nI0220 01:21:14.755163    3865 log.go:172] (0xc000a188f0) Data frame received for 1\nI0220 01:21:14.755187    3865 log.go:172] (0xc000683e00) (1) Data frame handling\nI0220 01:21:14.755219    3865 log.go:172] (0xc000683e00) (1) Data frame sent\nI0220 01:21:14.755230    3865 log.go:172] (0xc000a188f0) (0xc000683e00) Stream removed, broadcasting: 1\nI0220 01:21:14.755567    3865 log.go:172] (0xc000a188f0) (0xc000683ea0) Stream removed, broadcasting: 5\nI0220 01:21:14.755787    3865 log.go:172] (0xc000a188f0) Go away received\nI0220 01:21:14.757014    3865 log.go:172] (0xc000a188f0) (0xc000683e00) Stream removed, broadcasting: 1\nI0220 01:21:14.757046    3865 log.go:172] (0xc000a188f0) (0xc0005c6820) Stream removed, broadcasting: 3\nI0220 01:21:14.757057    3865 log.go:172] (0xc000a188f0) (0xc000683ea0) Stream removed, broadcasting: 5\n"
Feb 20 01:21:14.773: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 20 01:21:14.773: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 20 01:21:14.780: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb 20 01:21:24.785: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 01:21:24.785: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 01:21:24.785: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
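Moving index.html back into htdocs re-arms the readiness check, which is why all three pods report Ready=true above; on ss-1 and ss-2 the file was never moved, so their mv fails harmlessly behind the "|| true". A sketch of confirming recovery at the StatefulSet level:

    # Both counters read 3 once every pod passes its probe again.
    kubectl -n statefulset-9264 get statefulset ss \
      -o jsonpath='{.status.replicas} {.status.readyReplicas}'
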
STEP: Confirming that stateful set scale down will not halt with an unhealthy stateful pod
Feb 20 01:21:24.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 01:21:25.110: INFO: stderr: "I0220 01:21:24.947550    3886 log.go:172] (0xc000bd71e0) (0xc000bd05a0) Create stream\nI0220 01:21:24.947605    3886 log.go:172] (0xc000bd71e0) (0xc000bd05a0) Stream added, broadcasting: 1\nI0220 01:21:24.950687    3886 log.go:172] (0xc000bd71e0) Reply frame received for 1\nI0220 01:21:24.950724    3886 log.go:172] (0xc000bd71e0) (0xc000a50320) Create stream\nI0220 01:21:24.950736    3886 log.go:172] (0xc000bd71e0) (0xc000a50320) Stream added, broadcasting: 3\nI0220 01:21:24.952145    3886 log.go:172] (0xc000bd71e0) Reply frame received for 3\nI0220 01:21:24.952172    3886 log.go:172] (0xc000bd71e0) (0xc000bd0640) Create stream\nI0220 01:21:24.952180    3886 log.go:172] (0xc000bd71e0) (0xc000bd0640) Stream added, broadcasting: 5\nI0220 01:21:24.953624    3886 log.go:172] (0xc000bd71e0) Reply frame received for 5\nI0220 01:21:25.024311    3886 log.go:172] (0xc000bd71e0) Data frame received for 5\nI0220 01:21:25.024388    3886 log.go:172] (0xc000bd0640) (5) Data frame handling\nI0220 01:21:25.024495    3886 log.go:172] (0xc000bd0640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 01:21:25.024573    3886 log.go:172] (0xc000bd71e0) Data frame received for 3\nI0220 01:21:25.024579    3886 log.go:172] (0xc000a50320) (3) Data frame handling\nI0220 01:21:25.024597    3886 log.go:172] (0xc000a50320) (3) Data frame sent\nI0220 01:21:25.098137    3886 log.go:172] (0xc000bd71e0) Data frame received for 1\nI0220 01:21:25.098186    3886 log.go:172] (0xc000bd71e0) (0xc000a50320) Stream removed, broadcasting: 3\nI0220 01:21:25.098268    3886 log.go:172] (0xc000bd05a0) (1) Data frame handling\nI0220 01:21:25.098284    3886 log.go:172] (0xc000bd05a0) (1) Data frame sent\nI0220 01:21:25.098295    3886 log.go:172] (0xc000bd71e0) (0xc000bd05a0) Stream removed, broadcasting: 1\nI0220 01:21:25.100119    3886 log.go:172] (0xc000bd71e0) (0xc000bd0640) Stream removed, broadcasting: 5\nI0220 01:21:25.100371    3886 log.go:172] (0xc000bd71e0) (0xc000bd05a0) Stream removed, broadcasting: 1\nI0220 01:21:25.100745    3886 log.go:172] (0xc000bd71e0) (0xc000a50320) Stream removed, broadcasting: 3\nI0220 01:21:25.100820    3886 log.go:172] (0xc000bd71e0) (0xc000bd0640) Stream removed, broadcasting: 5\nI0220 01:21:25.101094    3886 log.go:172] (0xc000bd71e0) Go away received\n"
Feb 20 01:21:25.110: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 01:21:25.110: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 20 01:21:25.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 01:21:25.512: INFO: stderr: "I0220 01:21:25.294342    3906 log.go:172] (0xc000111080) (0xc000687f40) Create stream\nI0220 01:21:25.294574    3906 log.go:172] (0xc000111080) (0xc000687f40) Stream added, broadcasting: 1\nI0220 01:21:25.297395    3906 log.go:172] (0xc000111080) Reply frame received for 1\nI0220 01:21:25.297439    3906 log.go:172] (0xc000111080) (0xc0004274a0) Create stream\nI0220 01:21:25.297447    3906 log.go:172] (0xc000111080) (0xc0004274a0) Stream added, broadcasting: 3\nI0220 01:21:25.298388    3906 log.go:172] (0xc000111080) Reply frame received for 3\nI0220 01:21:25.298411    3906 log.go:172] (0xc000111080) (0xc00091c000) Create stream\nI0220 01:21:25.298418    3906 log.go:172] (0xc000111080) (0xc00091c000) Stream added, broadcasting: 5\nI0220 01:21:25.299811    3906 log.go:172] (0xc000111080) Reply frame received for 5\nI0220 01:21:25.365330    3906 log.go:172] (0xc000111080) Data frame received for 5\nI0220 01:21:25.365394    3906 log.go:172] (0xc00091c000) (5) Data frame handling\nI0220 01:21:25.365424    3906 log.go:172] (0xc00091c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 01:21:25.398655    3906 log.go:172] (0xc000111080) Data frame received for 3\nI0220 01:21:25.398688    3906 log.go:172] (0xc0004274a0) (3) Data frame handling\nI0220 01:21:25.398712    3906 log.go:172] (0xc0004274a0) (3) Data frame sent\nI0220 01:21:25.499297    3906 log.go:172] (0xc000111080) (0xc0004274a0) Stream removed, broadcasting: 3\nI0220 01:21:25.499508    3906 log.go:172] (0xc000111080) Data frame received for 1\nI0220 01:21:25.499605    3906 log.go:172] (0xc000111080) (0xc00091c000) Stream removed, broadcasting: 5\nI0220 01:21:25.499693    3906 log.go:172] (0xc000687f40) (1) Data frame handling\nI0220 01:21:25.499731    3906 log.go:172] (0xc000687f40) (1) Data frame sent\nI0220 01:21:25.499784    3906 log.go:172] (0xc000111080) (0xc000687f40) Stream removed, broadcasting: 1\nI0220 01:21:25.499863    3906 log.go:172] (0xc000111080) Go away received\nI0220 01:21:25.501182    3906 log.go:172] (0xc000111080) (0xc000687f40) Stream removed, broadcasting: 1\nI0220 01:21:25.501206    3906 log.go:172] (0xc000111080) (0xc0004274a0) Stream removed, broadcasting: 3\nI0220 01:21:25.501214    3906 log.go:172] (0xc000111080) (0xc00091c000) Stream removed, broadcasting: 5\n"
Feb 20 01:21:25.512: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 01:21:25.512: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 20 01:21:25.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 20 01:21:25.796: INFO: stderr: "I0220 01:21:25.629212    3927 log.go:172] (0xc0000f8b00) (0xc0006a5c20) Create stream\nI0220 01:21:25.629310    3927 log.go:172] (0xc0000f8b00) (0xc0006a5c20) Stream added, broadcasting: 1\nI0220 01:21:25.631816    3927 log.go:172] (0xc0000f8b00) Reply frame received for 1\nI0220 01:21:25.631927    3927 log.go:172] (0xc0000f8b00) (0xc00064c000) Create stream\nI0220 01:21:25.631943    3927 log.go:172] (0xc0000f8b00) (0xc00064c000) Stream added, broadcasting: 3\nI0220 01:21:25.633406    3927 log.go:172] (0xc0000f8b00) Reply frame received for 3\nI0220 01:21:25.633425    3927 log.go:172] (0xc0000f8b00) (0xc00064c0a0) Create stream\nI0220 01:21:25.633430    3927 log.go:172] (0xc0000f8b00) (0xc00064c0a0) Stream added, broadcasting: 5\nI0220 01:21:25.634678    3927 log.go:172] (0xc0000f8b00) Reply frame received for 5\nI0220 01:21:25.690411    3927 log.go:172] (0xc0000f8b00) Data frame received for 5\nI0220 01:21:25.690462    3927 log.go:172] (0xc00064c0a0) (5) Data frame handling\nI0220 01:21:25.690484    3927 log.go:172] (0xc00064c0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0220 01:21:25.719295    3927 log.go:172] (0xc0000f8b00) Data frame received for 3\nI0220 01:21:25.719317    3927 log.go:172] (0xc00064c000) (3) Data frame handling\nI0220 01:21:25.719330    3927 log.go:172] (0xc00064c000) (3) Data frame sent\nI0220 01:21:25.786421    3927 log.go:172] (0xc0000f8b00) Data frame received for 1\nI0220 01:21:25.786762    3927 log.go:172] (0xc0006a5c20) (1) Data frame handling\nI0220 01:21:25.786799    3927 log.go:172] (0xc0006a5c20) (1) Data frame sent\nI0220 01:21:25.787363    3927 log.go:172] (0xc0000f8b00) (0xc0006a5c20) Stream removed, broadcasting: 1\nI0220 01:21:25.788105    3927 log.go:172] (0xc0000f8b00) (0xc00064c000) Stream removed, broadcasting: 3\nI0220 01:21:25.788127    3927 log.go:172] (0xc0000f8b00) (0xc00064c0a0) Stream removed, broadcasting: 5\nI0220 01:21:25.788139    3927 log.go:172] (0xc0000f8b00) Go away received\nI0220 01:21:25.788394    3927 log.go:172] (0xc0000f8b00) (0xc0006a5c20) Stream removed, broadcasting: 1\nI0220 01:21:25.788441    3927 log.go:172] (0xc0000f8b00) (0xc00064c000) Stream removed, broadcasting: 3\nI0220 01:21:25.788471    3927 log.go:172] (0xc0000f8b00) (0xc00064c0a0) Stream removed, broadcasting: 5\n"
Feb 20 01:21:25.797: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 20 01:21:25.797: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 20 01:21:25.797: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 01:21:25.804: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Feb 20 01:21:35.817: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 01:21:35.817: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 01:21:35.817: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 01:21:35.845: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 20 01:21:35.845: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  }]
Feb 20 01:21:35.845: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  }]
Feb 20 01:21:35.845: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  }]
Feb 20 01:21:35.845: INFO: 
Feb 20 01:21:35.845: INFO: StatefulSet ss has not reached scale 0, at 3
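These tabular dumps list, per pod, the node, the pod phase, the deletion grace period (the GRACE column shows 30s once termination begins, as in the dumps below), and the full condition set. The equivalent view straight from kubectl:

    # -o wide adds the node column that the framework prints above.
    kubectl -n statefulset-9264 get pods -o wide
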
Feb 20 01:21:37.699: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 20 01:21:37.699: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  }]
Feb 20 01:21:37.699: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  }]
Feb 20 01:21:37.699: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  }]
Feb 20 01:21:37.699: INFO: 
Feb 20 01:21:37.699: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 01:21:38 - 01:21:41: INFO: (four more near-identical status dumps elided, one per second: ss-0, ss-1 and ss-2 all Running, GRACE 30s, Ready=False, each followed by "StatefulSet ss has not reached scale 0, at 3")
Feb 20 01:21:42.976: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 20 01:21:42.976: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  }]
Feb 20 01:21:42.976: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  }]
Feb 20 01:21:42.977: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  }]
Feb 20 01:21:42.977: INFO: 
Feb 20 01:21:42.977: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 01:21:43.984: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 20 01:21:43.984: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  }]
Feb 20 01:21:43.984: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  }]
Feb 20 01:21:43.984: INFO: 
Feb 20 01:21:43.984: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 20 01:21:44.998: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 20 01:21:44.998: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:20:42 +0000 UTC  }]
Feb 20 01:21:44.998: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 01:21:03 +0000 UTC  }]
Feb 20 01:21:44.998: INFO: 
Feb 20 01:21:44.998: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9264
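The retries below are the suite trying to restore index.html on ss-0 while the scale-down tears the pod away: the exec fails first with "container not found" and then, once the pod object itself is deleted, with NotFound, until the framework gives up and moves on. Driving the same scale-down by hand (namespace and set name from this run):

    # With Parallel pod management all pods terminate at once,
    # with no ordinal ordering and no waiting on readiness.
    kubectl -n statefulset-9264 scale statefulset ss --replicas=0
    kubectl -n statefulset-9264 get pods -w    # watch ss-0..ss-2 go Terminating together
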
Feb 20 01:21:46.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 01:21:46.212: INFO: rc: 1
Feb 20 01:21:46.213: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Feb 20 01:21:56.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 01:21:56.381: INFO: rc: 1
Feb 20 01:21:56.381: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 20 01:22:06 - 01:26:41: INFO: (the same RunHostCmd retry repeated every 10 s, 28 more times, each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found)
Feb 20 01:26:51.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9264 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 20 01:26:51.870: INFO: rc: 1
Feb 20 01:26:51.870: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Feb 20 01:26:51.870: INFO: Scaling statefulset ss to 0
Feb 20 01:26:51.888: INFO: Waiting for statefulset status.replicas updated to 0
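With the pods gone, the suite waits for the controller to report zero replicas in status. The same check by hand, as a sketch:

    kubectl -n statefulset-9264 get statefulset ss -o jsonpath='{.status.replicas}'   # 0 once drained
    kubectl -n statefulset-9264 get pods --no-headers 2>/dev/null | wc -l             # 0 pods left
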
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 20 01:26:51.891: INFO: Deleting all statefulset in ns statefulset-9264
Feb 20 01:26:51.894: INFO: Scaling statefulset ss to 0
Feb 20 01:26:52.222: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 01:26:52.227: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:26:52.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9264" for this suite.

• [SLOW TEST:369.930 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":264,"skipped":4227,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:26:52.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:26:52.514: INFO: Waiting up to 5m0s for pod "busybox-user-65534-15c7ed0a-3454-4cab-9bcd-0cfec4ad52d8" in namespace "security-context-test-9506" to be "success or failure"
Feb 20 01:26:52.574: INFO: Pod "busybox-user-65534-15c7ed0a-3454-4cab-9bcd-0cfec4ad52d8": Phase="Pending", Reason="", readiness=false. Elapsed: 59.773117ms
Feb 20 01:26:54.581: INFO: Pod "busybox-user-65534-15c7ed0a-3454-4cab-9bcd-0cfec4ad52d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066863422s
Feb 20 01:26:56.594: INFO: Pod "busybox-user-65534-15c7ed0a-3454-4cab-9bcd-0cfec4ad52d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07979115s
Feb 20 01:26:58.604: INFO: Pod "busybox-user-65534-15c7ed0a-3454-4cab-9bcd-0cfec4ad52d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089252803s
Feb 20 01:27:00.615: INFO: Pod "busybox-user-65534-15c7ed0a-3454-4cab-9bcd-0cfec4ad52d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100178554s
Feb 20 01:27:00.615: INFO: Pod "busybox-user-65534-15c7ed0a-3454-4cab-9bcd-0cfec4ad52d8" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:27:00.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9506" for this suite.

• [SLOW TEST:8.248 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":265,"skipped":4271,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:27:00.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Feb 20 01:27:00.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 20 01:27:01.140: INFO: stderr: ""
Feb 20 01:27:01.140: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:27:01.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7960" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":280,"completed":266,"skipped":4298,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:27:01.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:28:01.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4044" for this suite.

• [SLOW TEST:60.173 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":267,"skipped":4308,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:28:01.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 20 01:28:01.439: INFO: Waiting up to 5m0s for pod "downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0" in namespace "downward-api-3409" to be "success or failure"
Feb 20 01:28:01.445: INFO: Pod "downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048207ms
Feb 20 01:28:03.451: INFO: Pod "downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012328547s
Feb 20 01:28:05.503: INFO: Pod "downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063629849s
Feb 20 01:28:07.627: INFO: Pod "downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188412233s
Feb 20 01:28:09.637: INFO: Pod "downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197634558s
Feb 20 01:28:11.647: INFO: Pod "downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.207549446s
STEP: Saw pod success
Feb 20 01:28:11.647: INFO: Pod "downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0" satisfied condition "success or failure"
Feb 20 01:28:11.651: INFO: Trying to get logs from node jerma-node pod downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0 container dapi-container: 
STEP: delete the pod
Feb 20 01:28:11.921: INFO: Waiting for pod downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0 to disappear
Feb 20 01:28:11.935: INFO: Pod downward-api-a1d92339-37ff-457f-8840-f89e37dd09b0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:28:11.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3409" for this suite.

• [SLOW TEST:10.628 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":268,"skipped":4316,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:28:11.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 20 01:28:12.084: INFO: Waiting up to 5m0s for pod "downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4" in namespace "downward-api-2209" to be "success or failure"
Feb 20 01:28:12.122: INFO: Pod "downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.489642ms
Feb 20 01:28:14.129: INFO: Pod "downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044174219s
Feb 20 01:28:16.155: INFO: Pod "downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070223982s
Feb 20 01:28:18.167: INFO: Pod "downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082648433s
Feb 20 01:28:20.178: INFO: Pod "downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093712542s
STEP: Saw pod success
Feb 20 01:28:20.179: INFO: Pod "downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4" satisfied condition "success or failure"
Feb 20 01:28:20.214: INFO: Trying to get logs from node jerma-node pod downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4 container dapi-container: 
STEP: delete the pod
Feb 20 01:28:20.241: INFO: Waiting for pod downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4 to disappear
Feb 20 01:28:20.247: INFO: Pod downward-api-3481f422-7eb4-4c89-8fee-66b45eceffe4 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:28:20.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2209" for this suite.

• [SLOW TEST:8.315 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":269,"skipped":4329,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:28:20.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-qk42
STEP: Creating a pod to test atomic-volume-subpath
Feb 20 01:28:20.368: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qk42" in namespace "subpath-3887" to be "success or failure"
Feb 20 01:28:20.374: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.974643ms
Feb 20 01:28:22.380: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011038887s
Feb 20 01:28:24.394: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025364797s
Feb 20 01:28:26.403: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034389382s
Feb 20 01:28:28.421: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 8.052294023s
Feb 20 01:28:30.426: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 10.05767356s
Feb 20 01:28:32.433: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 12.06420799s
Feb 20 01:28:34.447: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 14.078012216s
Feb 20 01:28:36.462: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 16.0936149s
Feb 20 01:28:38.526: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 18.15739022s
Feb 20 01:28:40.536: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 20.167294986s
Feb 20 01:28:42.564: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 22.195092531s
Feb 20 01:28:44.574: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 24.204926577s
Feb 20 01:28:46.583: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 26.214349363s
Feb 20 01:28:48.591: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Running", Reason="", readiness=true. Elapsed: 28.222397051s
Feb 20 01:28:50.600: INFO: Pod "pod-subpath-test-secret-qk42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.231575393s
STEP: Saw pod success
Feb 20 01:28:50.600: INFO: Pod "pod-subpath-test-secret-qk42" satisfied condition "success or failure"
Feb 20 01:28:50.611: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-qk42 container test-container-subpath-secret-qk42: 
STEP: delete the pod
Feb 20 01:28:50.677: INFO: Waiting for pod pod-subpath-test-secret-qk42 to disappear
Feb 20 01:28:50.684: INFO: Pod pod-subpath-test-secret-qk42 no longer exists
STEP: Deleting pod pod-subpath-test-secret-qk42
Feb 20 01:28:50.684: INFO: Deleting pod "pod-subpath-test-secret-qk42" in namespace "subpath-3887"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:28:50.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3887" for this suite.

• [SLOW TEST:30.432 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":270,"skipped":4329,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:28:50.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test env composition
Feb 20 01:28:50.824: INFO: Waiting up to 5m0s for pod "var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46" in namespace "var-expansion-1441" to be "success or failure"
Feb 20 01:28:50.854: INFO: Pod "var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46": Phase="Pending", Reason="", readiness=false. Elapsed: 30.078165ms
Feb 20 01:28:52.861: INFO: Pod "var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036977858s
Feb 20 01:28:54.895: INFO: Pod "var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071053672s
Feb 20 01:28:56.901: INFO: Pod "var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07713433s
Feb 20 01:28:58.908: INFO: Pod "var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08429855s
STEP: Saw pod success
Feb 20 01:28:58.908: INFO: Pod "var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46" satisfied condition "success or failure"
Feb 20 01:28:58.912: INFO: Trying to get logs from node jerma-node pod var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46 container dapi-container: 
STEP: delete the pod
Feb 20 01:28:59.017: INFO: Waiting for pod var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46 to disappear
Feb 20 01:28:59.039: INFO: Pod var-expansion-e08a83da-e49b-447c-b7e9-d711a5909d46 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:28:59.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1441" for this suite.

• [SLOW TEST:8.368 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":271,"skipped":4337,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:28:59.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:28:59.201: INFO: Creating ReplicaSet my-hostname-basic-ea304acf-789f-4c69-87a8-12c24aee8579
Feb 20 01:28:59.227: INFO: Pod name my-hostname-basic-ea304acf-789f-4c69-87a8-12c24aee8579: Found 1 pods out of 1
Feb 20 01:28:59.227: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ea304acf-789f-4c69-87a8-12c24aee8579" is running
Feb 20 01:29:05.315: INFO: Pod "my-hostname-basic-ea304acf-789f-4c69-87a8-12c24aee8579-rtphw" is running (conditions: [])
Feb 20 01:29:05.316: INFO: Trying to dial the pod
Feb 20 01:29:10.339: INFO: Controller my-hostname-basic-ea304acf-789f-4c69-87a8-12c24aee8579: Got expected result from replica 1 [my-hostname-basic-ea304acf-789f-4c69-87a8-12c24aee8579-rtphw]: "my-hostname-basic-ea304acf-789f-4c69-87a8-12c24aee8579-rtphw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:29:10.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-812" for this suite.

• [SLOW TEST:11.286 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":272,"skipped":4340,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:29:10.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 20 01:29:10.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:29:18.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7064" for this suite.

• [SLOW TEST:8.441 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4351,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:29:18.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-dc190e0f-9e73-43fc-9569-60eeb9d5c5de
STEP: Creating a pod to test consume configMaps
Feb 20 01:29:19.027: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159" in namespace "projected-2739" to be "success or failure"
Feb 20 01:29:19.065: INFO: Pod "pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159": Phase="Pending", Reason="", readiness=false. Elapsed: 37.637664ms
Feb 20 01:29:21.073: INFO: Pod "pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04640578s
Feb 20 01:29:23.078: INFO: Pod "pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051345811s
Feb 20 01:29:25.365: INFO: Pod "pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338332316s
Feb 20 01:29:27.375: INFO: Pod "pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159": Phase="Pending", Reason="", readiness=false. Elapsed: 8.348032594s
Feb 20 01:29:29.382: INFO: Pod "pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.354872792s
STEP: Saw pod success
Feb 20 01:29:29.382: INFO: Pod "pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159" satisfied condition "success or failure"
Feb 20 01:29:29.390: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 20 01:29:29.458: INFO: Waiting for pod pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159 to disappear
Feb 20 01:29:29.465: INFO: Pod pod-projected-configmaps-03c1d575-d8aa-416e-8abb-09b52554a159 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:29:29.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2739" for this suite.

• [SLOW TEST:10.676 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":274,"skipped":4371,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:29:29.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 20 01:29:38.477: INFO: Successfully updated pod "labelsupdateaa093cf1-b5ae-4f9e-ba18-d101b7de02a0"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:29:40.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3218" for this suite.

• [SLOW TEST:11.072 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":275,"skipped":4405,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:29:40.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 20 01:29:40.694: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 20 01:29:40.742: INFO: Waiting for terminating namespaces to be deleted...
Feb 20 01:29:40.745: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 20 01:29:40.751: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.751: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 20 01:29:40.751: INFO: pod-exec-websocket-197c4b84-b1a0-4039-9c37-e389051dfec0 from pods-7064 started at 2020-02-20 01:29:10 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.751: INFO: 	Container main ready: true, restart count 0
Feb 20 01:29:40.751: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 20 01:29:40.751: INFO: 	Container weave ready: true, restart count 1
Feb 20 01:29:40.751: INFO: 	Container weave-npc ready: true, restart count 0
Feb 20 01:29:40.751: INFO: labelsupdateaa093cf1-b5ae-4f9e-ba18-d101b7de02a0 from projected-3218 started at 2020-02-20 01:29:29 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.751: INFO: 	Container client-container ready: true, restart count 0
Feb 20 01:29:40.751: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 20 01:29:40.768: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.769: INFO: 	Container kube-scheduler ready: true, restart count 18
Feb 20 01:29:40.769: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.769: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 20 01:29:40.769: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.769: INFO: 	Container etcd ready: true, restart count 1
Feb 20 01:29:40.769: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.769: INFO: 	Container coredns ready: true, restart count 0
Feb 20 01:29:40.769: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.769: INFO: 	Container coredns ready: true, restart count 0
Feb 20 01:29:40.769: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.769: INFO: 	Container kube-controller-manager ready: true, restart count 14
Feb 20 01:29:40.769: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 20 01:29:40.769: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 20 01:29:40.769: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 20 01:29:40.769: INFO: 	Container weave ready: true, restart count 0
Feb 20 01:29:40.769: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d3a25800-3a35-4bf1-842f-2ca01cd938bf 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-d3a25800-3a35-4bf1-842f-2ca01cd938bf off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d3a25800-3a35-4bf1-842f-2ca01cd938bf
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:30:19.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-35" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:38.648 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":276,"skipped":4405,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:30:19.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-615e1143-8bf0-4b36-b5a6-8c1ae8bdc3d9
STEP: Creating a pod to test consume configMaps
Feb 20 01:30:19.319: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd" in namespace "projected-5293" to be "success or failure"
Feb 20 01:30:19.338: INFO: Pod "pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.70125ms
Feb 20 01:30:21.344: INFO: Pod "pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025231156s
Feb 20 01:30:23.351: INFO: Pod "pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031791815s
Feb 20 01:30:25.355: INFO: Pod "pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035970696s
Feb 20 01:30:27.360: INFO: Pod "pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04101107s
Feb 20 01:30:29.365: INFO: Pod "pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.046363673s
Feb 20 01:30:31.372: INFO: Pod "pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.053421149s
STEP: Saw pod success
Feb 20 01:30:31.373: INFO: Pod "pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd" satisfied condition "success or failure"
Feb 20 01:30:31.376: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd container projected-configmap-volume-test: 
STEP: delete the pod
Feb 20 01:30:31.474: INFO: Waiting for pod pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd to disappear
Feb 20 01:30:31.487: INFO: Pod pod-projected-configmaps-9972c764-a66a-47f9-a228-49088d06e0cd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:30:31.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5293" for this suite.

• [SLOW TEST:12.298 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":277,"skipped":4429,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:30:31.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-a30b921d-577f-435e-a9b3-b5a7ac887b66
STEP: Creating a pod to test consume configMaps
Feb 20 01:30:31.882: INFO: Waiting up to 5m0s for pod "pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f" in namespace "configmap-2696" to be "success or failure"
Feb 20 01:30:31.988: INFO: Pod "pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f": Phase="Pending", Reason="", readiness=false. Elapsed: 105.696433ms
Feb 20 01:30:33.995: INFO: Pod "pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112718368s
Feb 20 01:30:36.000: INFO: Pod "pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117627427s
Feb 20 01:30:38.007: INFO: Pod "pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124899462s
Feb 20 01:30:40.012: INFO: Pod "pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.130028151s
STEP: Saw pod success
Feb 20 01:30:40.013: INFO: Pod "pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f" satisfied condition "success or failure"
Feb 20 01:30:40.015: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f container configmap-volume-test: 
STEP: delete the pod
Feb 20 01:30:40.093: INFO: Waiting for pod pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f to disappear
Feb 20 01:30:40.150: INFO: Pod pod-configmaps-c7d748b1-bf4a-47b5-b9c9-b36917c80a2f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:30:40.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2696" for this suite.

• [SLOW TEST:8.661 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":278,"skipped":4447,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 20 01:30:40.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-0a2d6a35-26ea-491c-aab8-fdc5e98e3855
STEP: Creating a pod to test consume configMaps
Feb 20 01:30:40.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98" in namespace "projected-1270" to be "success or failure"
Feb 20 01:30:40.348: INFO: Pod "pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98": Phase="Pending", Reason="", readiness=false. Elapsed: 23.426769ms
Feb 20 01:30:42.392: INFO: Pod "pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066978068s
Feb 20 01:30:44.410: INFO: Pod "pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084891952s
Feb 20 01:30:46.421: INFO: Pod "pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096331869s
Feb 20 01:30:48.426: INFO: Pod "pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101147095s
STEP: Saw pod success
Feb 20 01:30:48.426: INFO: Pod "pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98" satisfied condition "success or failure"
Feb 20 01:30:48.433: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 20 01:30:48.561: INFO: Waiting for pod pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98 to disappear
Feb 20 01:30:48.592: INFO: Pod pod-projected-configmaps-39a5ffa4-7e17-417c-9509-b8a330194a98 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 20 01:30:48.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1270" for this suite.

• [SLOW TEST:8.441 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":279,"skipped":4537,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
Feb 20 01:30:48.607: INFO: Running AfterSuite actions on all nodes
Feb 20 01:30:48.607: INFO: Running AfterSuite actions on node 1
Feb 20 01:30:48.607: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":279,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339

Ran 280 of 4845 Specs in 6723.923 seconds
FAIL! -- 279 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (6724.03s)
FAIL