I0511 20:42:58.225335       7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0511 20:42:58.225513       7 e2e.go:124] Starting e2e run "4022ebdc-0385-4296-8111-b3eb82374338" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589229777 - Will randomize all specs
Will run 275 of 4992 specs

May 11 20:42:58.288: INFO: >>> kubeConfig: /root/.kube/config
May 11 20:42:58.290: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 11 20:42:58.321: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 11 20:42:58.359: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 11 20:42:58.359: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 11 20:42:58.359: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 11 20:42:58.370: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 11 20:42:58.370: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 11 20:42:58.370: INFO: e2e test version: v1.18.2
May 11 20:42:58.371: INFO: kube-apiserver version: v1.18.2
May 11 20:42:58.371: INFO: >>> kubeConfig: /root/.kube/config
May 11 20:42:58.375: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:42:58.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
May 11 20:42:58.464: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-63f081c0-abfe-45f6-a70b-8d5b73b4d64d
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:43:08.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-242" for this suite.
• [SLOW TEST:10.470 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":18,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:43:08.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May 11 20:43:10.198: INFO: >>> kubeConfig: /root/.kube/config
May 11 20:43:13.222: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:43:24.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5607" for this suite.
• [SLOW TEST:16.196 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":2,"skipped":25,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:43:25.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 11 20:43:48.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 20:43:48.482: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 20:43:50.482: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 20:43:50.485: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 20:43:52.482: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 20:43:52.493: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 20:43:54.482: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 20:43:54.853: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 20:43:56.482: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 20:43:56.799: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 20:43:58.482: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 20:43:58.536: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 20:44:00.482: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 20:44:00.752: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 20:44:02.482: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 20:44:02.559: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 20:44:04.482: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 20:44:04.486: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:44:04.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2199" for this suite.
• [SLOW TEST:39.457 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":113,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:44:04.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
May 11 20:44:05.753: INFO: created pod pod-service-account-defaultsa
May 11 20:44:05.753: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 11 20:44:05.865: INFO: created pod pod-service-account-mountsa
May 11 20:44:05.865: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 11 20:44:05.955: INFO: created pod pod-service-account-nomountsa
May 11 20:44:05.955: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 11 20:44:06.092: INFO: created pod pod-service-account-defaultsa-mountspec
May 11 20:44:06.092: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 11 20:44:06.979: INFO: created pod pod-service-account-mountsa-mountspec
May 11 20:44:06.979: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 11 20:44:07.042: INFO: created pod pod-service-account-nomountsa-mountspec
May 11 20:44:07.042: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 11 20:44:08.315: INFO: created pod pod-service-account-defaultsa-nomountspec
May 11 20:44:08.315: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 11 20:44:08.781: INFO: created pod pod-service-account-mountsa-nomountspec
May 11 20:44:08.781: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 11 20:44:08.850: INFO: created pod pod-service-account-nomountsa-nomountspec
May 11 20:44:08.850: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:44:08.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-265" for this suite.
• [SLOW TEST:6.206 seconds]
[sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":4,"skipped":124,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:44:10.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:44:44.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7464" for this suite.
STEP: Destroying namespace "nsdeletetest-9859" for this suite.
May 11 20:44:45.780: INFO: Namespace nsdeletetest-9859 was already deleted
STEP: Destroying namespace "nsdeletetest-5069" for this suite.
• [SLOW TEST:35.151 seconds]
[sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":5,"skipped":144,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:44:45.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:44:57.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4418" for this suite.
• [SLOW TEST:11.785 seconds]
[sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":6,"skipped":163,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:44:57.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 20:44:58.888: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 11 20:45:01.468: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:45:02.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4829" for this suite.
• [SLOW TEST:5.049 seconds]
[sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":7,"skipped":168,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:45:02.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-c37d846a-5289-480d-be50-a857ffce2579
STEP: Creating a pod to test consume configMaps
May 11 20:45:04.352: INFO: Waiting up to 5m0s for pod "pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c" in namespace "configmap-4163" to be "Succeeded or Failed"
May 11 20:45:04.865: INFO: Pod "pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 512.81792ms
May 11 20:45:07.099: INFO: Pod "pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746563097s
May 11 20:45:09.212: INFO: Pod "pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.859985216s
May 11 20:45:11.275: INFO: Pod "pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.923109957s
May 11 20:45:13.476: INFO: Pod "pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.123876471s
STEP: Saw pod success
May 11 20:45:13.476: INFO: Pod "pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c" satisfied condition "Succeeded or Failed"
May 11 20:45:13.551: INFO: Trying to get logs from node kali-worker pod pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c container configmap-volume-test: 
STEP: delete the pod
May 11 20:45:13.800: INFO: Waiting for pod pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c to disappear
May 11 20:45:13.820: INFO: Pod pod-configmaps-42f59bb6-46c6-40df-9f7c-aac61b9a9b8c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:45:13.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4163" for this suite.
• [SLOW TEST:11.256 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":188,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:45:13.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:45:33.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1921" for this suite.
• [SLOW TEST:19.207 seconds]
[sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":9,"skipped":191,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:45:33.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 11 20:45:35.408: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 11 20:45:37.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826735, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826735, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826735, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826735, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 20:45:39.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826735, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826735, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826735, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826735, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 20:45:42.594: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 20:45:42.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:45:44.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6503" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:11.050 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":10,"skipped":195,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:45:44.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 20:45:46.236: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 20:45:48.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826746, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826746, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 20:45:50.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826746, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826746, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 20:45:53.516: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:46:06.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8868" for this suite.
STEP: Destroying namespace "webhook-8868-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.080 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":11,"skipped":215,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:46:06.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0511 20:46:08.818005 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 11 20:46:08.818: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:46:08.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9445" for this suite. 
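For reference, the orphaning behavior exercised by the garbage-collector spec above corresponds to deleting the Deployment with an Orphan propagation policy. A minimal sketch of the DeleteOptions body such a request would carry is shown below (the fragment itself is standard Kubernetes API shape; how the e2e framework actually issues the call is not shown in this log):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

Sent as the body of the DELETE on the Deployment, this tells the garbage collector not to cascade, so the owned ReplicaSet survives — which is exactly what the "wait for deployment deletion to see if the garbage collector mistakenly deletes the rs" step verifies.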
•
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":12,"skipped":218,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:46:08.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 20:46:10.323: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 20:46:12.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826771, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826771, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826771, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826770, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 20:46:14.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826771, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826771, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826771, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826770, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 20:46:16.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826771, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826771, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826771, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826770, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 20:46:19.738: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 11 20:46:19.758: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:46:19.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5768" for this suite.
STEP: Destroying namespace "webhook-5768-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:11.113 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":13,"skipped":234,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:46:19.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 20:46:21.288: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 20:46:23.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826781, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826781, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826781, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826781, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 20:46:26.334: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 20:46:26.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:46:28.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-545" for this suite.
STEP: Destroying namespace "webhook-545-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.389 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":14,"skipped":253,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:46:29.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-f73c42a2-8ed9-4024-87af-c400377fb845
STEP: Creating a pod to test consume configMaps
May 11 20:46:30.701: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23" in namespace "projected-8076" to be "Succeeded or Failed"
May 11 20:46:31.310: INFO: Pod "pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23": Phase="Pending", Reason="", readiness=false. Elapsed: 608.740702ms
May 11 20:46:33.435: INFO: Pod "pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.733733608s
May 11 20:46:35.568: INFO: Pod "pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.866887421s
May 11 20:46:37.853: INFO: Pod "pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23": Phase="Pending", Reason="", readiness=false. Elapsed: 7.151897538s
May 11 20:46:39.856: INFO: Pod "pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.154820018s
STEP: Saw pod success
May 11 20:46:39.856: INFO: Pod "pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23" satisfied condition "Succeeded or Failed"
May 11 20:46:39.858: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23 container projected-configmap-volume-test:
STEP: delete the pod
May 11 20:46:40.257: INFO: Waiting for pod pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23 to disappear
May 11 20:46:40.299: INFO: Pod pod-projected-configmaps-1cfafa07-bdf0-430a-b867-963e13354a23 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:46:40.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8076" for this suite.
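The kind of pod the projected-configMap spec above creates can be sketched as the manifest below. This is a hedged illustration, not the test's actual manifest: the image, command, mount path, and key names are assumptions chosen for the example; only the name patterns (`pod-projected-configmaps-…`, container `projected-configmap-volume-test`) echo the log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative; the test generates a UUID suffix
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # assumption: any image that can read the mounted file
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-example   # illustrative configMap name
          items:
          - key: data-1                     # assumed key
            path: data-1
```

The pod runs to completion ("Succeeded or Failed" in the log) because the container exits after printing the file, which is how the test asserts the configMap content was projected into the volume.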
• [SLOW TEST:11.004 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":272,"failed":0}
SSS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:46:40.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May 11 20:46:41.060: INFO: Created pod &Pod{ObjectMeta:{dns-9341 dns-9341 /api/v1/namespaces/dns-9341/pods/dns-9341 2f9aaf3e-3032-4ca7-83c7-cdc29c1dcdce 3507997 0 2020-05-11 20:46:41 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-11 20:46:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7mxnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7mxnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7mxnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:46:41.105: INFO: The status of Pod dns-9341 is Pending, waiting for it to be Running (with Ready = true)
May 11 20:46:43.207: INFO: The status of Pod dns-9341 is Pending, waiting for it to be Running (with Ready = true)
May 11 20:46:45.109: INFO: The status of Pod dns-9341 is Pending, waiting for it to be Running (with Ready = true)
May 11 20:46:47.118: INFO: The status of Pod dns-9341 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
May 11 20:46:47.118: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9341 PodName:dns-9341 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 20:46:47.118: INFO: >>> kubeConfig: /root/.kube/config
I0511 20:46:47.171677       7 log.go:172] (0xc00200cbb0) (0xc002aa6460) Create stream
I0511 20:46:47.171708       7 log.go:172] (0xc00200cbb0) (0xc002aa6460) Stream added, broadcasting: 1
I0511 20:46:47.178634       7 log.go:172] (0xc00200cbb0) Reply frame received for 1
I0511 20:46:47.178678       7 log.go:172] (0xc00200cbb0) (0xc00299bc20) Create stream
I0511 20:46:47.178689       7 log.go:172] (0xc00200cbb0) (0xc00299bc20) Stream added, broadcasting: 3
I0511 20:46:47.179620       7 log.go:172] (0xc00200cbb0) Reply frame received for 3
I0511 20:46:47.179647       7 log.go:172] (0xc00200cbb0) (0xc0028ac320) Create stream
I0511 20:46:47.179655       7 log.go:172] (0xc00200cbb0) (0xc0028ac320) Stream added, broadcasting: 5
I0511 20:46:47.180483       7 log.go:172] (0xc00200cbb0) Reply frame received for 5
I0511 20:46:47.225410       7 log.go:172] (0xc00200cbb0) Data frame received for 3
I0511 20:46:47.225429       7 log.go:172] (0xc00299bc20) (3) Data frame handling
I0511 20:46:47.225438       7 log.go:172] (0xc00299bc20) (3) Data frame sent
I0511 20:46:47.227352       7 log.go:172] (0xc00200cbb0) Data frame received for 3
I0511 20:46:47.227372       7 log.go:172] (0xc00299bc20) (3) Data frame handling
I0511 20:46:47.227992       7 log.go:172] (0xc00200cbb0) Data frame received for 5
I0511 20:46:47.228009       7 log.go:172] (0xc0028ac320) (5) Data frame handling
I0511 20:46:47.230461       7 log.go:172] (0xc00200cbb0) Data frame received for 1
I0511 20:46:47.230489       7 log.go:172] (0xc002aa6460) (1) Data frame handling
I0511 20:46:47.230508       7 log.go:172] (0xc002aa6460) (1) Data frame sent
I0511 20:46:47.230540       7 log.go:172] (0xc00200cbb0) (0xc002aa6460) Stream removed, broadcasting: 1
I0511 20:46:47.230555       7 log.go:172] (0xc00200cbb0) Go away received
I0511 20:46:47.230848       7 log.go:172] (0xc00200cbb0) (0xc002aa6460) Stream removed, broadcasting: 1
I0511 20:46:47.230860       7 log.go:172] (0xc00200cbb0) (0xc00299bc20) Stream removed, broadcasting: 3
I0511 20:46:47.230865       7 log.go:172] (0xc00200cbb0) (0xc0028ac320) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
May 11 20:46:47.230: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9341 PodName:dns-9341 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 20:46:47.230: INFO: >>> kubeConfig: /root/.kube/config
I0511 20:46:47.379275       7 log.go:172] (0xc00200ce70) (0xc002aa6500) Create stream
I0511 20:46:47.379297       7 log.go:172] (0xc00200ce70) (0xc002aa6500) Stream added, broadcasting: 1
I0511 20:46:47.380806       7 log.go:172] (0xc00200ce70) Reply frame received for 1
I0511 20:46:47.380825       7 log.go:172] (0xc00200ce70) (0xc002aa65a0) Create stream
I0511 20:46:47.380831       7 log.go:172] (0xc00200ce70) (0xc002aa65a0) Stream added, broadcasting: 3
I0511 20:46:47.381463       7 log.go:172] (0xc00200ce70) Reply frame received for 3
I0511 20:46:47.381480       7 log.go:172] (0xc00200ce70) (0xc00299bea0) Create stream
I0511 20:46:47.381486       7 log.go:172] (0xc00200ce70) (0xc00299bea0) Stream added, broadcasting: 5
I0511 20:46:47.381972       7 log.go:172] (0xc00200ce70) Reply frame received for 5
I0511 20:46:47.493863       7 log.go:172] (0xc00200ce70) Data frame received for 3
I0511 20:46:47.493894       7 log.go:172] (0xc002aa65a0) (3) Data frame handling
I0511 20:46:47.493913       7 log.go:172] (0xc002aa65a0) (3) Data frame sent
I0511 20:46:47.494950       7 log.go:172] (0xc00200ce70) Data frame received for 5
I0511 20:46:47.494969       7 log.go:172] (0xc00299bea0) (5) Data frame handling
I0511 20:46:47.495208       7 log.go:172] (0xc00200ce70) Data frame received for 3
I0511 20:46:47.495228       7 log.go:172] (0xc002aa65a0) (3) Data frame handling
I0511 20:46:47.496602       7 log.go:172] (0xc00200ce70) Data frame received for 1
I0511 20:46:47.496623       7 log.go:172] (0xc002aa6500) (1) Data frame handling
I0511 20:46:47.496656       7 log.go:172] (0xc002aa6500) (1) Data frame sent
I0511 20:46:47.497057       7 log.go:172] (0xc00200ce70) (0xc002aa6500) Stream removed, broadcasting: 1
I0511 20:46:47.497393       7 log.go:172] (0xc00200ce70) Go away received
I0511 20:46:47.497455       7 log.go:172] (0xc00200ce70) (0xc002aa6500) Stream removed, broadcasting: 1
I0511 20:46:47.497488       7 log.go:172] (0xc00200ce70) (0xc002aa65a0) Stream removed, broadcasting: 3
I0511 20:46:47.497512       7 log.go:172] (0xc00200ce70) (0xc00299bea0) Stream removed, broadcasting: 5
May 11 20:46:47.497: INFO: Deleting pod dns-9341...
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:46:47.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9341" for this suite.
• [SLOW TEST:7.535 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":16,"skipped":275,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:46:47.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
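The pod created by the DNS nameservers spec can be reduced to the manifest below. The `dnsPolicy`, `dnsConfig` (nameserver `1.1.1.1`, search domain `resolv.conf.local`), image, and `pause` argument are all taken directly from the Pod dump in the log; this is a simplified sketch, omitting the service-account volume and tolerations that Kubernetes injects automatically.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example          # the spec above used the generated name dns-9341
spec:
  dnsPolicy: "None"          # ignore the cluster DNS settings entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]          # keep the container alive so the test can exec into it
```

With `dnsPolicy: None`, the pod's `/etc/resolv.conf` is built solely from `dnsConfig`, which is what the `/agnhost dns-suffix` and `/agnhost dns-server-list` exec calls in the log verify.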
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 20:46:50.333: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 20:46:52.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 20:46:55.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 20:46:56.990: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826810, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 20:47:00.047: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:47:00.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8911" for this suite.
STEP: Destroying namespace "webhook-8911-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.963 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":17,"skipped":286,"failed":0}
S
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:47:00.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 20:47:01.090: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May 11 20:47:06.122: INFO: Pod name cleanup-pod: Found 1 pods
out of 1 STEP: ensuring each pod is running May 11 20:47:06.122: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 11 20:47:06.217: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3882 /apis/apps/v1/namespaces/deployment-3882/deployments/test-cleanup-deployment 00f4b5c3-fdd1-4dea-86c9-9aef06707c77 3508172 1 2020-05-11 20:47:06 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-11 20:47:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields field-ownership JSON, logged as a decimal byte slice; omitted for readability],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003056328 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 11 20:47:06.294: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-3882 /apis/apps/v1/namespaces/deployment-3882/replicasets/test-cleanup-deployment-b4867b47f 8187beb8-7ba0-4973-9fcc-1b2596a2255c 3508180 1 2020-05-11 20:47:06 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 00f4b5c3-fdd1-4dea-86c9-9aef06707c77 0xc0031e4eb0 0xc0031e4eb1}] [] [{kube-controller-manager Update apps/v1 2020-05-11 20:47:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields field-ownership JSON, logged as a decimal byte slice; omitted for readability],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031e4f28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 20:47:06.294: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 11 20:47:06.295: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3882 /apis/apps/v1/namespaces/deployment-3882/replicasets/test-cleanup-controller 7eabdf1b-d00e-498d-b388-a38c3dd5539a 3508173 1 2020-05-11 20:47:01 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 00f4b5c3-fdd1-4dea-86c9-9aef06707c77 0xc0031e4d97 0xc0031e4d98}] [] [{e2e.test Update apps/v1 2020-05-11 20:47:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields field-ownership JSON, logged as a decimal byte slice; omitted for readability],}} {kube-controller-manager Update apps/v1 2020-05-11 20:47:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[managedFields field-ownership JSON, logged as a decimal byte slice; omitted for readability],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0031e4e38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 20:47:06.456: INFO: Pod "test-cleanup-controller-96dmx" is available: &Pod{ObjectMeta:{test-cleanup-controller-96dmx test-cleanup-controller- deployment-3882 /api/v1/namespaces/deployment-3882/pods/test-cleanup-controller-96dmx c4df780d-3135-4d35-b005-b2baccf102c9 3508155 0 2020-05-11 20:47:01 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 7eabdf1b-d00e-498d-b388-a38c3dd5539a 0xc0031e5447 0xc0031e5448}] [] [{kube-controller-manager Update v1 2020-05-11 20:47:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields field-ownership JSON, logged as a decimal byte slice; omitted for readability],}} {kubelet Update v1 2020-05-11 20:47:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[pod status field-ownership JSON, logged as a decimal byte slice; omitted for readability],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xppqk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xppqk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xppqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io
/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:47:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:47:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:47:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:47:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.53,StartTime:2020-05-11 20:47:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:47:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fcc4129a523a424d1f2a233ac3e1bc339d4b94ec26453c4e248c28f474461e52,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.53,},},EphemeralContainerStatuses:[]ContainerStatus{},},} 
May 11 20:47:06.457: INFO: Pod "test-cleanup-deployment-b4867b47f-sst8v" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-sst8v test-cleanup-deployment-b4867b47f- deployment-3882 /api/v1/namespaces/deployment-3882/pods/test-cleanup-deployment-b4867b47f-sst8v 29c98d03-6853-4a4f-aa51-df983e4944f0 3508178 0 2020-05-11 20:47:06 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 8187beb8-7ba0-4973-9fcc-1b2596a2255c 0xc0031e5630 0xc0031e5631}] [] [{kube-controller-manager Update v1 2020-05-11 20:47:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields field-ownership JSON, logged as a decimal byte slice; omitted for readability],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xppqk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xppqk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xppqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:
nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:47:06 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:47:06.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3882" for this suite. • [SLOW TEST:5.753 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":18,"skipped":287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:47:06.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-wjkl STEP: 
Creating a pod to test atomic-volume-subpath May 11 20:47:06.934: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wjkl" in namespace "subpath-1112" to be "Succeeded or Failed" May 11 20:47:07.195: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Pending", Reason="", readiness=false. Elapsed: 260.545192ms May 11 20:47:09.540: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.605327246s May 11 20:47:11.543: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.609154827s May 11 20:47:13.548: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 6.613600316s May 11 20:47:15.552: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 8.617378922s May 11 20:47:17.554: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 10.619884134s May 11 20:47:19.557: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 12.62285436s May 11 20:47:21.599: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 14.664680621s May 11 20:47:23.604: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 16.669329855s May 11 20:47:25.607: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 18.672935061s May 11 20:47:27.611: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 20.676827732s May 11 20:47:29.614: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 22.68021882s May 11 20:47:31.619: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Running", Reason="", readiness=true. Elapsed: 24.684383933s May 11 20:47:33.778: INFO: Pod "pod-subpath-test-secret-wjkl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.843852584s STEP: Saw pod success May 11 20:47:33.778: INFO: Pod "pod-subpath-test-secret-wjkl" satisfied condition "Succeeded or Failed" May 11 20:47:33.781: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-wjkl container test-container-subpath-secret-wjkl: STEP: delete the pod May 11 20:47:33.954: INFO: Waiting for pod pod-subpath-test-secret-wjkl to disappear May 11 20:47:34.106: INFO: Pod pod-subpath-test-secret-wjkl no longer exists STEP: Deleting pod pod-subpath-test-secret-wjkl May 11 20:47:34.106: INFO: Deleting pod "pod-subpath-test-secret-wjkl" in namespace "subpath-1112" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:47:34.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1112" for this suite. • [SLOW TEST:27.532 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":19,"skipped":311,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:47:34.119: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 11 20:47:34.276: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:34.281: INFO: Number of nodes with available pods: 0 May 11 20:47:34.281: INFO: Node kali-worker is running more than one daemon pod May 11 20:47:35.304: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:35.343: INFO: Number of nodes with available pods: 0 May 11 20:47:35.343: INFO: Node kali-worker is running more than one daemon pod May 11 20:47:36.287: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:36.291: INFO: Number of nodes with available pods: 0 May 11 20:47:36.291: INFO: Node kali-worker is running more than one daemon pod May 11 20:47:37.310: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:37.313: INFO: Number of nodes with available pods: 0 May 11 20:47:37.313: INFO: Node kali-worker is running more than one daemon pod May 11 20:47:38.287: INFO: DaemonSet pods can't tolerate node 
kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:38.291: INFO: Number of nodes with available pods: 0 May 11 20:47:38.291: INFO: Node kali-worker is running more than one daemon pod May 11 20:47:39.287: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:39.294: INFO: Number of nodes with available pods: 0 May 11 20:47:39.294: INFO: Node kali-worker is running more than one daemon pod May 11 20:47:40.287: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:40.292: INFO: Number of nodes with available pods: 2 May 11 20:47:40.292: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 11 20:47:40.328: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:40.332: INFO: Number of nodes with available pods: 1 May 11 20:47:40.332: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:41.338: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:41.342: INFO: Number of nodes with available pods: 1 May 11 20:47:41.342: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:42.368: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:42.403: INFO: Number of nodes with available pods: 1 May 11 20:47:42.403: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:43.448: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:43.452: INFO: Number of nodes with available pods: 1 May 11 20:47:43.452: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:44.337: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:44.340: INFO: Number of nodes with available pods: 1 May 11 20:47:44.340: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:45.337: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:45.340: INFO: Number of nodes with available pods: 1 May 11 20:47:45.340: INFO: Node 
kali-worker2 is running more than one daemon pod May 11 20:47:46.430: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:46.433: INFO: Number of nodes with available pods: 1 May 11 20:47:46.433: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:47.337: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:47.341: INFO: Number of nodes with available pods: 1 May 11 20:47:47.341: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:48.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:48.589: INFO: Number of nodes with available pods: 1 May 11 20:47:48.589: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:49.393: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:49.396: INFO: Number of nodes with available pods: 1 May 11 20:47:49.396: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:50.338: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:50.370: INFO: Number of nodes with available pods: 1 May 11 20:47:50.370: INFO: Node kali-worker2 is running more than one daemon pod May 11 20:47:51.338: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:47:51.343: INFO: Number of nodes with available 
pods: 2 May 11 20:47:51.343: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8993, will wait for the garbage collector to delete the pods May 11 20:47:51.406: INFO: Deleting DaemonSet.extensions daemon-set took: 7.423691ms May 11 20:47:51.806: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.293923ms May 11 20:47:55.610: INFO: Number of nodes with available pods: 0 May 11 20:47:55.610: INFO: Number of running nodes: 0, number of available pods: 0 May 11 20:47:55.616: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8993/daemonsets","resourceVersion":"3508433"},"items":null} May 11 20:47:55.620: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8993/pods","resourceVersion":"3508433"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:47:55.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8993" for this suite. 
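The DaemonSet test above repeatedly logs "DaemonSet pods can't tolerate node kali-control-plane with taints [...], skip checking this node": the framework only expects a daemon pod on nodes whose taints are all matched by the pod's tolerations. A minimal standalone sketch of that check (simplified, hypothetical types mirroring the core/v1 `Taint`/`Toleration` shapes, not the actual framework code):

```go
package main

import "fmt"

// Taint and Toleration are simplified stand-ins for the core/v1 API types.
type Taint struct {
	Key    string
	Effect string
}

type Toleration struct {
	Key      string
	Operator string // "Exists" matches any value for the key
	Effect   string // empty matches all effects
}

// tolerated reports whether every taint on the node is matched by at least
// one toleration; only then does the e2e check count the node.
func tolerated(taints []Taint, tolerations []Toleration) bool {
	for _, taint := range taints {
		matched := false
		for _, tol := range tolerations {
			keyMatch := tol.Operator == "Exists" && (tol.Key == "" || tol.Key == taint.Key)
			effectMatch := tol.Effect == "" || tol.Effect == taint.Effect
			if keyMatch && effectMatch {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

func main() {
	master := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	// Default tolerations seen on the test pods in the log above:
	// not-ready/unreachable only, so the master taint is NOT tolerated.
	dsTolerations := []Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: "Exists", Effect: "NoExecute"},
		{Key: "node.kubernetes.io/unreachable", Operator: "Exists", Effect: "NoExecute"},
	}
	fmt.Println(tolerated(master, dsTolerations)) // false: node is skipped
}
```

This is why only the two worker nodes are counted ("Number of running nodes: 2") while kali-control-plane is skipped.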
• [SLOW TEST:21.517 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":20,"skipped":367,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:47:55.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-eb54642a-d448-46cc-b3bf-40d2e10d08e3 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-eb54642a-d448-46cc-b3bf-40d2e10d08e3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:48:02.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3505" for this suite. 
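The projected-ConfigMap test above mounts a ConfigMap into a pod through a `projected` volume and then waits to observe the update in the mounted file. A manifest of roughly the shape the test builds (names here are illustrative; the real test generates UUID-suffixed names like the ones in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # illustrative; the test uses a generated name
spec:
  containers:
  - name: viewer
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
      readOnly: true
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # illustrative; log names carry a UUID suffix
```

Because the kubelet syncs projected volumes periodically, an update to the ConfigMap shows up in `/etc/config` after a short delay, which is the "waiting to observe update in volume" step in the log.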
• [SLOW TEST:6.538 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":372,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:48:02.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 11 20:48:02.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-827' May 11 20:48:05.143: INFO: stderr: "" May 11 20:48:05.143: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 11 20:48:05.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-827' May 11 20:48:05.439: INFO: stderr: "" May 11 
20:48:05.439: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 11 20:48:06.443: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:48:06.443: INFO: Found 0 / 1 May 11 20:48:07.441: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:48:07.441: INFO: Found 0 / 1 May 11 20:48:08.478: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:48:08.479: INFO: Found 1 / 1 May 11 20:48:08.479: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 20:48:08.502: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:48:08.502: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 20:48:08.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-hwzlz --namespace=kubectl-827' May 11 20:48:08.629: INFO: stderr: "" May 11 20:48:08.629: INFO: stdout: "Name: agnhost-master-hwzlz\nNamespace: kubectl-827\nPriority: 0\nNode: kali-worker/172.17.0.15\nStart Time: Mon, 11 May 2020 20:48:05 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.7\nIPs:\n IP: 10.244.2.7\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://952f74424303a3d7dd29249eca4f2549eca228ea3bb5a3940c47c9a551dcd242\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 11 May 2020 20:48:08 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-v78qw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-v78qw:\n Type: Secret (a volume populated by a 
Secret)\n SecretName: default-token-v78qw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-827/agnhost-master-hwzlz to kali-worker\n Normal Pulled 2s kubelet, kali-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 0s kubelet, kali-worker Created container agnhost-master\n Normal Started 0s kubelet, kali-worker Started container agnhost-master\n" May 11 20:48:08.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-827' May 11 20:48:08.914: INFO: stderr: "" May 11 20:48:08.914: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-827\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-hwzlz\n" May 11 20:48:08.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-827' May 11 20:48:09.065: INFO: stderr: "" May 11 20:48:09.065: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-827\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.63.192\nPort: 
6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.7:6379\nSession Affinity: None\nEvents: \n" May 11 20:48:09.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane' May 11 20:48:09.184: INFO: stderr: "" May 11 20:48:09.184: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:30:59 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Mon, 11 May 2020 20:48:08 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 11 May 2020 20:45:50 +0000 Wed, 29 Apr 2020 09:30:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 11 May 2020 20:45:50 +0000 Wed, 29 Apr 2020 09:30:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 11 May 2020 20:45:50 +0000 Wed, 29 Apr 2020 09:30:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 11 May 2020 20:45:50 +0000 Wed, 29 Apr 2020 09:31:34 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.19\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n 
memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 2146cf85bed648199604ab2e0e9ac609\n System UUID: e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-rvq2k 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system coredns-66bff467f8-w6zxd 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kindnet-65djz 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 12d\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-proxy-pnhtq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n local-path-storage local-path-provisioner-bd4bb6b75-6l9ph 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 11 20:48:09.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-827' May 11 20:48:09.295: INFO: stderr: "" May 11 20:48:09.295: INFO: stdout: "Name: kubectl-827\nLabels: e2e-framework=kubectl\n 
e2e-run=4022ebdc-0385-4296-8111-b3eb82374338\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:48:09.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-827" for this suite. • [SLOW TEST:7.126 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":22,"skipped":382,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:48:09.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 11 20:48:09.379: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 11 20:48:09.963: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:48:09Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T20:48:09Z]] name:name1 resourceVersion:3508558 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c20ea689-5501-4900-a188-0a09cc8d50a5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 11 20:48:19.999: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:48:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T20:48:19Z]] name:name2 resourceVersion:3508608 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:438102bc-087e-48d2-b8a4-bd20189233e4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 11 20:48:30.005: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:48:09Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T20:48:30Z]] name:name1 resourceVersion:3508638 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c20ea689-5501-4900-a188-0a09cc8d50a5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second 
CR
May 11 20:48:40.047: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:48:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T20:48:40Z]] name:name2 resourceVersion:3508669 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:438102bc-087e-48d2-b8a4-bd20189233e4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May 11 20:48:50.056: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:48:09Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T20:48:30Z]] name:name1 resourceVersion:3508698 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c20ea689-5501-4900-a188-0a09cc8d50a5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May 11 20:49:00.065: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:48:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T20:48:40Z]] name:name2 resourceVersion:3508728 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:438102bc-087e-48d2-b8a4-bd20189233e4] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:49:10.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8707" for this suite.
• [SLOW TEST:61.337 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":23,"skipped":400,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:49:10.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-a898042c-72f1-409d-a08a-da7a8128bb55
STEP: Creating a pod to test consume configMaps
May 11 20:49:10.712: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-657689e0-c90e-4a8e-9965-dfb930f67797" in namespace "projected-7242" to be "Succeeded or Failed"
May 11 20:49:10.733: INFO: Pod "pod-projected-configmaps-657689e0-c90e-4a8e-9965-dfb930f67797": Phase="Pending", Reason="", readiness=false. Elapsed: 21.153715ms
May 11 20:49:12.795: INFO: Pod "pod-projected-configmaps-657689e0-c90e-4a8e-9965-dfb930f67797": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082974405s
May 11 20:49:14.800: INFO: Pod "pod-projected-configmaps-657689e0-c90e-4a8e-9965-dfb930f67797": Phase="Running", Reason="", readiness=true. Elapsed: 4.088276061s
May 11 20:49:16.804: INFO: Pod "pod-projected-configmaps-657689e0-c90e-4a8e-9965-dfb930f67797": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091887255s
STEP: Saw pod success
May 11 20:49:16.804: INFO: Pod "pod-projected-configmaps-657689e0-c90e-4a8e-9965-dfb930f67797" satisfied condition "Succeeded or Failed"
May 11 20:49:16.807: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-657689e0-c90e-4a8e-9965-dfb930f67797 container projected-configmap-volume-test:
STEP: delete the pod
May 11 20:49:17.007: INFO: Waiting for pod pod-projected-configmaps-657689e0-c90e-4a8e-9965-dfb930f67797 to disappear
May 11 20:49:17.016: INFO: Pod pod-projected-configmaps-657689e0-c90e-4a8e-9965-dfb930f67797 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:49:17.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7242" for this suite.
• [SLOW TEST:6.387 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":403,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:49:17.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:49:17.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9698" for this suite.
•
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":25,"skipped":419,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:49:17.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 20:49:17.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3156
I0511 20:49:17.446282 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3156, replica count: 1
I0511 20:49:18.496700 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 20:49:19.496952 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 20:49:20.497404 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 20:49:21.497630 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0
runningButNotReady I0511 20:49:22.497809 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:49:23.498075 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 20:49:23.632: INFO: Created: latency-svc-lcf9t May 11 20:49:23.646: INFO: Got endpoints: latency-svc-lcf9t [48.442918ms] May 11 20:49:23.775: INFO: Created: latency-svc-jpkdz May 11 20:49:23.796: INFO: Got endpoints: latency-svc-jpkdz [149.551731ms] May 11 20:49:23.834: INFO: Created: latency-svc-d96sk May 11 20:49:23.899: INFO: Got endpoints: latency-svc-d96sk [252.353011ms] May 11 20:49:23.933: INFO: Created: latency-svc-xlv8s May 11 20:49:23.951: INFO: Got endpoints: latency-svc-xlv8s [304.050404ms] May 11 20:49:24.071: INFO: Created: latency-svc-ht6jg May 11 20:49:24.104: INFO: Created: latency-svc-fkv2k May 11 20:49:24.104: INFO: Got endpoints: latency-svc-ht6jg [457.596286ms] May 11 20:49:24.136: INFO: Got endpoints: latency-svc-fkv2k [489.265213ms] May 11 20:49:24.226: INFO: Created: latency-svc-b87gl May 11 20:49:24.234: INFO: Got endpoints: latency-svc-b87gl [587.30984ms] May 11 20:49:24.297: INFO: Created: latency-svc-sdwkm May 11 20:49:24.312: INFO: Got endpoints: latency-svc-sdwkm [665.705465ms] May 11 20:49:24.370: INFO: Created: latency-svc-s8dn7 May 11 20:49:24.391: INFO: Got endpoints: latency-svc-s8dn7 [744.967449ms] May 11 20:49:24.436: INFO: Created: latency-svc-h9qs4 May 11 20:49:24.491: INFO: Got endpoints: latency-svc-h9qs4 [844.127247ms] May 11 20:49:24.507: INFO: Created: latency-svc-48wdh May 11 20:49:24.524: INFO: Got endpoints: latency-svc-48wdh [877.232321ms] May 11 20:49:24.549: INFO: Created: latency-svc-rhdqc May 11 20:49:24.566: INFO: Got endpoints: latency-svc-rhdqc [919.311595ms] May 11 20:49:24.586: INFO: Created: latency-svc-7gnbj May 11 20:49:24.635: INFO: Got 
endpoints: latency-svc-7gnbj [987.993692ms] May 11 20:49:24.641: INFO: Created: latency-svc-x4qmv May 11 20:49:24.658: INFO: Got endpoints: latency-svc-x4qmv [1.011099929s] May 11 20:49:24.726: INFO: Created: latency-svc-m7pkm May 11 20:49:24.801: INFO: Got endpoints: latency-svc-m7pkm [1.154469862s] May 11 20:49:24.828: INFO: Created: latency-svc-t6dwg May 11 20:49:24.843: INFO: Got endpoints: latency-svc-t6dwg [1.196438154s] May 11 20:49:24.868: INFO: Created: latency-svc-bgljd May 11 20:49:24.976: INFO: Got endpoints: latency-svc-bgljd [1.179669603s] May 11 20:49:24.995: INFO: Created: latency-svc-skct6 May 11 20:49:25.143: INFO: Got endpoints: latency-svc-skct6 [1.244507137s] May 11 20:49:25.216: INFO: Created: latency-svc-4thxb May 11 20:49:25.347: INFO: Got endpoints: latency-svc-4thxb [1.396199214s] May 11 20:49:25.385: INFO: Created: latency-svc-qwpn2 May 11 20:49:25.395: INFO: Got endpoints: latency-svc-qwpn2 [1.291060476s] May 11 20:49:25.491: INFO: Created: latency-svc-2pdcx May 11 20:49:25.522: INFO: Got endpoints: latency-svc-2pdcx [1.38602499s] May 11 20:49:25.584: INFO: Created: latency-svc-cmxp7 May 11 20:49:25.742: INFO: Got endpoints: latency-svc-cmxp7 [1.508025969s] May 11 20:49:25.747: INFO: Created: latency-svc-hmhw7 May 11 20:49:26.707: INFO: Got endpoints: latency-svc-hmhw7 [2.394778416s] May 11 20:49:26.716: INFO: Created: latency-svc-c226r May 11 20:49:27.001: INFO: Got endpoints: latency-svc-c226r [2.609803937s] May 11 20:49:27.095: INFO: Created: latency-svc-b7tjf May 11 20:49:27.526: INFO: Got endpoints: latency-svc-b7tjf [3.035752137s] May 11 20:49:27.741: INFO: Created: latency-svc-jz5sm May 11 20:49:27.781: INFO: Got endpoints: latency-svc-jz5sm [3.257424019s] May 11 20:49:27.903: INFO: Created: latency-svc-fxlg4 May 11 20:49:27.907: INFO: Got endpoints: latency-svc-fxlg4 [3.341053382s] May 11 20:49:28.303: INFO: Created: latency-svc-zl229 May 11 20:49:28.363: INFO: Got endpoints: latency-svc-zl229 [3.7283076s] May 11 20:49:28.502: 
INFO: Created: latency-svc-6rtxw May 11 20:49:28.951: INFO: Got endpoints: latency-svc-6rtxw [4.29308308s] May 11 20:49:28.977: INFO: Created: latency-svc-5gz8b May 11 20:49:29.012: INFO: Got endpoints: latency-svc-5gz8b [4.210409426s] May 11 20:49:29.354: INFO: Created: latency-svc-bzhbb May 11 20:49:29.559: INFO: Got endpoints: latency-svc-bzhbb [4.715837608s] May 11 20:49:29.586: INFO: Created: latency-svc-9t5s6 May 11 20:49:29.605: INFO: Got endpoints: latency-svc-9t5s6 [4.629658294s] May 11 20:49:30.204: INFO: Created: latency-svc-sdklf May 11 20:49:30.293: INFO: Got endpoints: latency-svc-sdklf [5.149453543s] May 11 20:49:30.388: INFO: Created: latency-svc-g8px9 May 11 20:49:30.421: INFO: Got endpoints: latency-svc-g8px9 [5.073407006s] May 11 20:49:30.586: INFO: Created: latency-svc-rglck May 11 20:49:30.601: INFO: Got endpoints: latency-svc-rglck [5.205947994s] May 11 20:49:30.672: INFO: Created: latency-svc-l9q64 May 11 20:49:30.741: INFO: Got endpoints: latency-svc-l9q64 [5.219580812s] May 11 20:49:30.787: INFO: Created: latency-svc-bbrcw May 11 20:49:30.964: INFO: Got endpoints: latency-svc-bbrcw [5.221427783s] May 11 20:49:31.003: INFO: Created: latency-svc-5ntql May 11 20:49:31.236: INFO: Got endpoints: latency-svc-5ntql [4.528255175s] May 11 20:49:31.310: INFO: Created: latency-svc-9gbfh May 11 20:49:31.587: INFO: Got endpoints: latency-svc-9gbfh [4.585655622s] May 11 20:49:31.602: INFO: Created: latency-svc-gw5wd May 11 20:49:31.783: INFO: Got endpoints: latency-svc-gw5wd [4.256724637s] May 11 20:49:32.299: INFO: Created: latency-svc-jztp5 May 11 20:49:32.328: INFO: Got endpoints: latency-svc-jztp5 [4.546506888s] May 11 20:49:32.436: INFO: Created: latency-svc-4jddx May 11 20:49:32.442: INFO: Got endpoints: latency-svc-4jddx [4.534748415s] May 11 20:49:32.491: INFO: Created: latency-svc-7pb6r May 11 20:49:32.509: INFO: Got endpoints: latency-svc-7pb6r [4.146176681s] May 11 20:49:32.882: INFO: Created: latency-svc-4lg8q May 11 20:49:33.136: INFO: Got 
endpoints: latency-svc-4lg8q [4.185428102s] May 11 20:49:33.225: INFO: Created: latency-svc-2wm45 May 11 20:49:33.234: INFO: Got endpoints: latency-svc-2wm45 [4.221742882s] May 11 20:49:34.066: INFO: Created: latency-svc-rf5tl May 11 20:49:34.123: INFO: Created: latency-svc-lz2j5 May 11 20:49:34.123: INFO: Got endpoints: latency-svc-rf5tl [4.564106655s] May 11 20:49:34.268: INFO: Got endpoints: latency-svc-lz2j5 [4.662899412s] May 11 20:49:34.285: INFO: Created: latency-svc-9ltpd May 11 20:49:34.295: INFO: Got endpoints: latency-svc-9ltpd [4.001981148s] May 11 20:49:34.322: INFO: Created: latency-svc-v8p6s May 11 20:49:34.331: INFO: Got endpoints: latency-svc-v8p6s [3.910936465s] May 11 20:49:34.406: INFO: Created: latency-svc-vz9ml May 11 20:49:34.410: INFO: Got endpoints: latency-svc-vz9ml [3.808749814s] May 11 20:49:34.430: INFO: Created: latency-svc-hf7k6 May 11 20:49:34.435: INFO: Got endpoints: latency-svc-hf7k6 [3.693174341s] May 11 20:49:34.453: INFO: Created: latency-svc-2kwjs May 11 20:49:34.465: INFO: Got endpoints: latency-svc-2kwjs [3.501014846s] May 11 20:49:34.484: INFO: Created: latency-svc-92g62 May 11 20:49:34.502: INFO: Got endpoints: latency-svc-92g62 [3.265818289s] May 11 20:49:34.561: INFO: Created: latency-svc-kcxnb May 11 20:49:34.598: INFO: Got endpoints: latency-svc-kcxnb [3.011421584s] May 11 20:49:34.705: INFO: Created: latency-svc-k9cl4 May 11 20:49:34.712: INFO: Got endpoints: latency-svc-k9cl4 [2.928666185s] May 11 20:49:34.742: INFO: Created: latency-svc-ttfnr May 11 20:49:34.754: INFO: Got endpoints: latency-svc-ttfnr [2.426087384s] May 11 20:49:34.791: INFO: Created: latency-svc-lft9j May 11 20:49:34.849: INFO: Got endpoints: latency-svc-lft9j [2.407393564s] May 11 20:49:34.880: INFO: Created: latency-svc-7qlfj May 11 20:49:34.905: INFO: Got endpoints: latency-svc-7qlfj [2.395780497s] May 11 20:49:34.934: INFO: Created: latency-svc-w5prq May 11 20:49:34.999: INFO: Got endpoints: latency-svc-w5prq [1.862427136s] May 11 20:49:35.007: 
INFO: Created: latency-svc-dfx6g May 11 20:49:35.036: INFO: Got endpoints: latency-svc-dfx6g [1.802006801s] May 11 20:49:35.078: INFO: Created: latency-svc-26j6n May 11 20:49:35.098: INFO: Got endpoints: latency-svc-26j6n [974.902093ms] May 11 20:49:35.150: INFO: Created: latency-svc-ntjlf May 11 20:49:35.152: INFO: Got endpoints: latency-svc-ntjlf [883.601454ms] May 11 20:49:35.180: INFO: Created: latency-svc-jld9n May 11 20:49:35.201: INFO: Got endpoints: latency-svc-jld9n [906.447122ms] May 11 20:49:35.228: INFO: Created: latency-svc-mx8x6 May 11 20:49:35.243: INFO: Got endpoints: latency-svc-mx8x6 [911.517477ms] May 11 20:49:35.306: INFO: Created: latency-svc-7zmjj May 11 20:49:35.322: INFO: Got endpoints: latency-svc-7zmjj [911.77663ms] May 11 20:49:35.354: INFO: Created: latency-svc-h7rfx May 11 20:49:35.370: INFO: Got endpoints: latency-svc-h7rfx [935.678835ms] May 11 20:49:35.390: INFO: Created: latency-svc-dkfdc May 11 20:49:35.424: INFO: Got endpoints: latency-svc-dkfdc [959.253759ms] May 11 20:49:35.438: INFO: Created: latency-svc-q6jd5 May 11 20:49:35.450: INFO: Got endpoints: latency-svc-q6jd5 [948.67483ms] May 11 20:49:35.474: INFO: Created: latency-svc-c25mz May 11 20:49:35.491: INFO: Got endpoints: latency-svc-c25mz [893.080297ms] May 11 20:49:35.516: INFO: Created: latency-svc-4n7f6 May 11 20:49:35.562: INFO: Got endpoints: latency-svc-4n7f6 [849.993705ms] May 11 20:49:35.575: INFO: Created: latency-svc-kf2b5 May 11 20:49:35.594: INFO: Got endpoints: latency-svc-kf2b5 [840.068561ms] May 11 20:49:35.618: INFO: Created: latency-svc-tkn2c May 11 20:49:35.631: INFO: Got endpoints: latency-svc-tkn2c [781.91215ms] May 11 20:49:35.654: INFO: Created: latency-svc-sccsk May 11 20:49:35.693: INFO: Got endpoints: latency-svc-sccsk [788.136484ms] May 11 20:49:35.708: INFO: Created: latency-svc-pg5g9 May 11 20:49:35.727: INFO: Got endpoints: latency-svc-pg5g9 [728.101995ms] May 11 20:49:35.750: INFO: Created: latency-svc-nrmnq May 11 20:49:35.788: INFO: Got 
endpoints: latency-svc-nrmnq [751.950296ms] May 11 20:49:35.850: INFO: Created: latency-svc-8zvmw May 11 20:49:35.855: INFO: Got endpoints: latency-svc-8zvmw [756.771936ms] May 11 20:49:35.883: INFO: Created: latency-svc-gpzz7 May 11 20:49:35.924: INFO: Got endpoints: latency-svc-gpzz7 [771.972723ms] May 11 20:49:35.993: INFO: Created: latency-svc-cw94c May 11 20:49:35.998: INFO: Got endpoints: latency-svc-cw94c [796.905938ms] May 11 20:49:36.039: INFO: Created: latency-svc-n62jz May 11 20:49:36.054: INFO: Got endpoints: latency-svc-n62jz [811.307223ms] May 11 20:49:36.080: INFO: Created: latency-svc-stm9v May 11 20:49:36.151: INFO: Got endpoints: latency-svc-stm9v [828.812927ms] May 11 20:49:36.158: INFO: Created: latency-svc-hrdjj May 11 20:49:36.211: INFO: Got endpoints: latency-svc-hrdjj [840.337509ms] May 11 20:49:36.305: INFO: Created: latency-svc-gtf25 May 11 20:49:36.314: INFO: Got endpoints: latency-svc-gtf25 [889.569033ms] May 11 20:49:36.350: INFO: Created: latency-svc-htm74 May 11 20:49:36.368: INFO: Got endpoints: latency-svc-htm74 [918.010979ms] May 11 20:49:36.449: INFO: Created: latency-svc-l656q May 11 20:49:36.458: INFO: Got endpoints: latency-svc-l656q [966.863651ms] May 11 20:49:36.482: INFO: Created: latency-svc-wx242 May 11 20:49:36.507: INFO: Got endpoints: latency-svc-wx242 [944.577206ms] May 11 20:49:36.536: INFO: Created: latency-svc-5k4js May 11 20:49:36.579: INFO: Got endpoints: latency-svc-5k4js [985.118329ms] May 11 20:49:36.596: INFO: Created: latency-svc-s7vkm May 11 20:49:36.632: INFO: Got endpoints: latency-svc-s7vkm [1.000833774s] May 11 20:49:36.669: INFO: Created: latency-svc-29t4m May 11 20:49:36.730: INFO: Got endpoints: latency-svc-29t4m [1.036178815s] May 11 20:49:36.739: INFO: Created: latency-svc-64lwl May 11 20:49:36.755: INFO: Got endpoints: latency-svc-64lwl [1.027639707s] May 11 20:49:36.788: INFO: Created: latency-svc-hcrc7 May 11 20:49:36.803: INFO: Got endpoints: latency-svc-hcrc7 [1.015603425s] May 11 20:49:36.879: 
INFO: Created: latency-svc-ntd25 May 11 20:49:36.909: INFO: Got endpoints: latency-svc-ntd25 [1.054243908s] May 11 20:49:36.956: INFO: Created: latency-svc-ptw8k May 11 20:49:36.972: INFO: Got endpoints: latency-svc-ptw8k [1.047952178s] May 11 20:49:37.033: INFO: Created: latency-svc-p4g8p May 11 20:49:37.040: INFO: Got endpoints: latency-svc-p4g8p [1.041242136s] May 11 20:49:37.119: INFO: Created: latency-svc-tvhxr May 11 20:49:37.257: INFO: Got endpoints: latency-svc-tvhxr [1.202546045s] May 11 20:49:37.280: INFO: Created: latency-svc-xstxn May 11 20:49:37.304: INFO: Got endpoints: latency-svc-xstxn [1.153069305s] May 11 20:49:37.389: INFO: Created: latency-svc-dvxqr May 11 20:49:37.420: INFO: Got endpoints: latency-svc-dvxqr [1.208771338s] May 11 20:49:37.457: INFO: Created: latency-svc-p5f5m May 11 20:49:37.472: INFO: Got endpoints: latency-svc-p5f5m [1.157841879s] May 11 20:49:37.520: INFO: Created: latency-svc-w82vt May 11 20:49:37.526: INFO: Got endpoints: latency-svc-w82vt [1.158084558s] May 11 20:49:37.550: INFO: Created: latency-svc-b22h8 May 11 20:49:37.582: INFO: Got endpoints: latency-svc-b22h8 [1.123089256s] May 11 20:49:37.676: INFO: Created: latency-svc-zqdzr May 11 20:49:37.712: INFO: Got endpoints: latency-svc-zqdzr [1.205640985s] May 11 20:49:37.713: INFO: Created: latency-svc-wwrvv May 11 20:49:37.760: INFO: Got endpoints: latency-svc-wwrvv [1.180100347s] May 11 20:49:37.837: INFO: Created: latency-svc-fwmm4 May 11 20:49:37.893: INFO: Got endpoints: latency-svc-fwmm4 [1.260683913s] May 11 20:49:37.893: INFO: Created: latency-svc-khh6v May 11 20:49:37.935: INFO: Got endpoints: latency-svc-khh6v [1.204879184s] May 11 20:49:38.006: INFO: Created: latency-svc-lsttk May 11 20:49:38.024: INFO: Got endpoints: latency-svc-lsttk [1.269332365s] May 11 20:49:38.048: INFO: Created: latency-svc-zs9wn May 11 20:49:38.061: INFO: Got endpoints: latency-svc-zs9wn [1.258192386s] May 11 20:49:38.090: INFO: Created: latency-svc-vrfgm May 11 20:49:38.190: INFO: Got 
endpoints: latency-svc-vrfgm [1.280819383s] May 11 20:49:38.193: INFO: Created: latency-svc-9pl9z May 11 20:49:38.198: INFO: Got endpoints: latency-svc-9pl9z [1.226403179s] May 11 20:49:38.258: INFO: Created: latency-svc-blmk2 May 11 20:49:38.334: INFO: Got endpoints: latency-svc-blmk2 [1.294219257s] May 11 20:49:38.364: INFO: Created: latency-svc-6bhpv May 11 20:49:38.385: INFO: Got endpoints: latency-svc-6bhpv [1.128226435s] May 11 20:49:38.408: INFO: Created: latency-svc-fcdjd May 11 20:49:38.422: INFO: Got endpoints: latency-svc-fcdjd [1.118382059s] May 11 20:49:38.472: INFO: Created: latency-svc-6g5rw May 11 20:49:38.487: INFO: Got endpoints: latency-svc-6g5rw [1.067137516s] May 11 20:49:38.526: INFO: Created: latency-svc-4b8l6 May 11 20:49:38.542: INFO: Got endpoints: latency-svc-4b8l6 [1.070584965s] May 11 20:49:38.565: INFO: Created: latency-svc-fgv2n May 11 20:49:38.604: INFO: Got endpoints: latency-svc-fgv2n [1.077285742s] May 11 20:49:38.619: INFO: Created: latency-svc-fpkdk May 11 20:49:38.627: INFO: Got endpoints: latency-svc-fpkdk [1.045493935s] May 11 20:49:38.654: INFO: Created: latency-svc-hv69z May 11 20:49:38.664: INFO: Got endpoints: latency-svc-hv69z [951.287795ms] May 11 20:49:38.691: INFO: Created: latency-svc-d5bgs May 11 20:49:38.735: INFO: Got endpoints: latency-svc-d5bgs [975.603248ms] May 11 20:49:38.744: INFO: Created: latency-svc-bv99n May 11 20:49:38.761: INFO: Got endpoints: latency-svc-bv99n [868.014823ms] May 11 20:49:38.786: INFO: Created: latency-svc-s989z May 11 20:49:38.804: INFO: Got endpoints: latency-svc-s989z [869.700756ms] May 11 20:49:38.835: INFO: Created: latency-svc-5xfhk May 11 20:49:38.885: INFO: Got endpoints: latency-svc-5xfhk [861.178284ms] May 11 20:49:38.907: INFO: Created: latency-svc-w5xxs May 11 20:49:38.943: INFO: Got endpoints: latency-svc-w5xxs [881.385482ms] May 11 20:49:38.984: INFO: Created: latency-svc-s8x7m May 11 20:49:39.029: INFO: Got endpoints: latency-svc-s8x7m [838.692229ms] May 11 20:49:39.056: 
INFO: Created: latency-svc-htl4f May 11 20:49:39.080: INFO: Got endpoints: latency-svc-htl4f [881.820907ms] May 11 20:49:39.118: INFO: Created: latency-svc-fjts2 May 11 20:49:39.239: INFO: Got endpoints: latency-svc-fjts2 [905.189264ms] May 11 20:49:39.242: INFO: Created: latency-svc-npbg5 May 11 20:49:39.255: INFO: Got endpoints: latency-svc-npbg5 [869.481302ms] May 11 20:49:39.284: INFO: Created: latency-svc-w6qlv May 11 20:49:39.297: INFO: Got endpoints: latency-svc-w6qlv [875.08941ms] May 11 20:49:39.448: INFO: Created: latency-svc-z2wpw May 11 20:49:39.471: INFO: Got endpoints: latency-svc-z2wpw [984.527968ms] May 11 20:49:39.513: INFO: Created: latency-svc-tn8b5 May 11 20:49:39.596: INFO: Got endpoints: latency-svc-tn8b5 [1.054229741s] May 11 20:49:39.633: INFO: Created: latency-svc-5h848 May 11 20:49:39.646: INFO: Got endpoints: latency-svc-5h848 [1.042368275s] May 11 20:49:39.706: INFO: Created: latency-svc-l6hnl May 11 20:49:39.740: INFO: Got endpoints: latency-svc-l6hnl [1.113078118s] May 11 20:49:39.742: INFO: Created: latency-svc-s47cs May 11 20:49:39.759: INFO: Got endpoints: latency-svc-s47cs [1.095223211s] May 11 20:49:39.789: INFO: Created: latency-svc-ftzcf May 11 20:49:39.802: INFO: Got endpoints: latency-svc-ftzcf [1.066299452s] May 11 20:49:39.855: INFO: Created: latency-svc-p8hsl May 11 20:49:39.861: INFO: Got endpoints: latency-svc-p8hsl [1.100241991s] May 11 20:49:39.884: INFO: Created: latency-svc-2zrds May 11 20:49:39.898: INFO: Got endpoints: latency-svc-2zrds [1.093414653s] May 11 20:49:39.920: INFO: Created: latency-svc-7zck5 May 11 20:49:39.935: INFO: Got endpoints: latency-svc-7zck5 [1.04958009s] May 11 20:49:40.005: INFO: Created: latency-svc-njlrg May 11 20:49:40.008: INFO: Got endpoints: latency-svc-njlrg [110.557123ms] May 11 20:49:40.041: INFO: Created: latency-svc-kx2s4 May 11 20:49:40.088: INFO: Got endpoints: latency-svc-kx2s4 [1.145058509s] May 11 20:49:40.166: INFO: Created: latency-svc-2jlvl May 11 20:49:40.170: INFO: Got 
endpoints: latency-svc-2jlvl [1.140609088s] May 11 20:49:40.250: INFO: Created: latency-svc-k2dq4 May 11 20:49:40.266: INFO: Got endpoints: latency-svc-k2dq4 [1.185335043s] May 11 20:49:40.342: INFO: Created: latency-svc-48mvp May 11 20:49:40.378: INFO: Got endpoints: latency-svc-48mvp [1.138315521s] May 11 20:49:40.449: INFO: Created: latency-svc-bc29g May 11 20:49:40.473: INFO: Got endpoints: latency-svc-bc29g [1.217666863s] May 11 20:49:40.514: INFO: Created: latency-svc-9wrl5 May 11 20:49:40.531: INFO: Got endpoints: latency-svc-9wrl5 [1.233257094s] May 11 20:49:40.617: INFO: Created: latency-svc-dth6k May 11 20:49:40.641: INFO: Got endpoints: latency-svc-dth6k [1.16931902s] May 11 20:49:40.684: INFO: Created: latency-svc-6z88s May 11 20:49:40.699: INFO: Got endpoints: latency-svc-6z88s [1.102723468s] May 11 20:49:40.781: INFO: Created: latency-svc-l9j8x May 11 20:49:40.790: INFO: Got endpoints: latency-svc-l9j8x [1.144189077s] May 11 20:49:40.922: INFO: Created: latency-svc-bd4g9 May 11 20:49:40.925: INFO: Got endpoints: latency-svc-bd4g9 [1.185027424s] May 11 20:49:40.984: INFO: Created: latency-svc-vtf8t May 11 20:49:41.001: INFO: Got endpoints: latency-svc-vtf8t [1.241622829s] May 11 20:49:41.074: INFO: Created: latency-svc-v9sdx May 11 20:49:41.110: INFO: Got endpoints: latency-svc-v9sdx [1.307931585s] May 11 20:49:41.158: INFO: Created: latency-svc-tmb2d May 11 20:49:41.239: INFO: Got endpoints: latency-svc-tmb2d [1.378005755s] May 11 20:49:41.260: INFO: Created: latency-svc-zpczh May 11 20:49:41.308: INFO: Got endpoints: latency-svc-zpczh [1.372953698s] May 11 20:49:41.368: INFO: Created: latency-svc-fpsxl May 11 20:49:41.405: INFO: Got endpoints: latency-svc-fpsxl [1.396049565s] May 11 20:49:41.586: INFO: Created: latency-svc-j9cj2 May 11 20:49:41.650: INFO: Got endpoints: latency-svc-j9cj2 [1.561715199s] May 11 20:49:41.650: INFO: Created: latency-svc-mqnmt May 11 20:49:41.801: INFO: Got endpoints: latency-svc-mqnmt [1.631406415s] May 11 20:49:41.806: 
INFO: Created: latency-svc-w4dmt May 11 20:49:41.855: INFO: Got endpoints: latency-svc-w4dmt [1.589112751s] May 11 20:49:41.957: INFO: Created: latency-svc-g2ndg May 11 20:49:41.993: INFO: Got endpoints: latency-svc-g2ndg [1.614892452s] May 11 20:49:42.029: INFO: Created: latency-svc-mhsmp May 11 20:49:42.083: INFO: Got endpoints: latency-svc-mhsmp [1.610416106s] May 11 20:49:42.101: INFO: Created: latency-svc-zmn9t May 11 20:49:42.133: INFO: Got endpoints: latency-svc-zmn9t [1.60244503s] May 11 20:49:43.222: INFO: Created: latency-svc-vh8jn May 11 20:49:43.224: INFO: Got endpoints: latency-svc-vh8jn [2.582711599s] May 11 20:49:43.882: INFO: Created: latency-svc-bfs2z May 11 20:49:44.197: INFO: Got endpoints: latency-svc-bfs2z [3.497558859s] May 11 20:49:44.718: INFO: Created: latency-svc-mmw8z May 11 20:49:45.113: INFO: Got endpoints: latency-svc-mmw8z [4.322541255s] May 11 20:49:45.442: INFO: Created: latency-svc-dqb6f May 11 20:49:45.749: INFO: Got endpoints: latency-svc-dqb6f [4.823155608s] May 11 20:49:46.025: INFO: Created: latency-svc-svwcq May 11 20:49:46.029: INFO: Got endpoints: latency-svc-svwcq [5.02535906s] May 11 20:49:46.518: INFO: Created: latency-svc-sktz8 May 11 20:49:46.795: INFO: Got endpoints: latency-svc-sktz8 [5.685617972s] May 11 20:49:46.834: INFO: Created: latency-svc-2r4tq May 11 20:49:46.861: INFO: Got endpoints: latency-svc-2r4tq [5.6219358s] May 11 20:49:47.640: INFO: Created: latency-svc-49l2g May 11 20:49:47.795: INFO: Got endpoints: latency-svc-49l2g [6.487027072s] May 11 20:49:47.799: INFO: Created: latency-svc-ljdrj May 11 20:49:48.138: INFO: Got endpoints: latency-svc-ljdrj [6.733499434s] May 11 20:49:48.198: INFO: Created: latency-svc-2vnml May 11 20:49:48.403: INFO: Got endpoints: latency-svc-2vnml [6.752920259s] May 11 20:49:48.565: INFO: Created: latency-svc-mkpgz May 11 20:49:48.588: INFO: Got endpoints: latency-svc-mkpgz [6.786703445s] May 11 20:49:48.631: INFO: Created: latency-svc-bvj2s May 11 20:49:48.682: INFO: Got 
endpoints: latency-svc-bvj2s [6.826607945s] May 11 20:49:48.910: INFO: Created: latency-svc-scc6z May 11 20:49:48.968: INFO: Got endpoints: latency-svc-scc6z [6.975103761s] May 11 20:49:49.059: INFO: Created: latency-svc-mwqf6 May 11 20:49:49.123: INFO: Got endpoints: latency-svc-mwqf6 [7.039900019s] May 11 20:49:49.124: INFO: Created: latency-svc-7dgzn May 11 20:49:49.202: INFO: Got endpoints: latency-svc-7dgzn [7.069082787s] May 11 20:49:49.255: INFO: Created: latency-svc-22pxw May 11 20:49:49.273: INFO: Got endpoints: latency-svc-22pxw [6.049924729s] May 11 20:49:49.383: INFO: Created: latency-svc-s4lnf May 11 20:49:49.393: INFO: Got endpoints: latency-svc-s4lnf [5.196240658s] May 11 20:49:49.478: INFO: Created: latency-svc-hzsk4 May 11 20:49:49.615: INFO: Got endpoints: latency-svc-hzsk4 [4.502417506s] May 11 20:49:49.634: INFO: Created: latency-svc-4wgt4 May 11 20:49:49.664: INFO: Got endpoints: latency-svc-4wgt4 [3.915470252s] May 11 20:49:49.702: INFO: Created: latency-svc-wdkll May 11 20:49:49.783: INFO: Got endpoints: latency-svc-wdkll [3.754593895s] May 11 20:49:49.803: INFO: Created: latency-svc-rn8j7 May 11 20:49:49.821: INFO: Got endpoints: latency-svc-rn8j7 [3.025652252s] May 11 20:49:49.861: INFO: Created: latency-svc-2mxtr May 11 20:49:49.880: INFO: Got endpoints: latency-svc-2mxtr [3.018665804s] May 11 20:49:49.945: INFO: Created: latency-svc-2s5nx May 11 20:49:49.971: INFO: Got endpoints: latency-svc-2s5nx [2.175463624s] May 11 20:49:49.995: INFO: Created: latency-svc-2pm6l May 11 20:49:50.007: INFO: Got endpoints: latency-svc-2pm6l [1.868714173s] May 11 20:49:50.065: INFO: Created: latency-svc-w499j May 11 20:49:50.090: INFO: Got endpoints: latency-svc-w499j [1.687361273s] May 11 20:49:50.125: INFO: Created: latency-svc-dvt4b May 11 20:49:50.140: INFO: Got endpoints: latency-svc-dvt4b [1.551947877s] May 11 20:49:50.251: INFO: Created: latency-svc-9hw4b May 11 20:49:50.295: INFO: Got endpoints: latency-svc-9hw4b [1.613005091s] May 11 20:49:50.325: 
INFO: Created: latency-svc-t7d86 May 11 20:49:50.332: INFO: Got endpoints: latency-svc-t7d86 [1.364268344s] May 11 20:49:50.394: INFO: Created: latency-svc-8vnhk May 11 20:49:50.425: INFO: Got endpoints: latency-svc-8vnhk [1.302399818s] May 11 20:49:50.426: INFO: Created: latency-svc-kh7js May 11 20:49:50.474: INFO: Got endpoints: latency-svc-kh7js [1.271828455s] May 11 20:49:50.580: INFO: Created: latency-svc-7z4sv May 11 20:49:50.592: INFO: Got endpoints: latency-svc-7z4sv [1.318086534s] May 11 20:49:50.654: INFO: Created: latency-svc-6vz4d May 11 20:49:50.753: INFO: Got endpoints: latency-svc-6vz4d [1.360215501s] May 11 20:49:50.823: INFO: Created: latency-svc-vblmh May 11 20:49:50.898: INFO: Got endpoints: latency-svc-vblmh [1.282029194s] May 11 20:49:50.924: INFO: Created: latency-svc-tm8wn May 11 20:49:50.941: INFO: Got endpoints: latency-svc-tm8wn [1.276715535s] May 11 20:49:50.965: INFO: Created: latency-svc-5b87w May 11 20:49:50.983: INFO: Got endpoints: latency-svc-5b87w [1.199033334s] May 11 20:49:51.035: INFO: Created: latency-svc-w5994 May 11 20:49:51.077: INFO: Got endpoints: latency-svc-w5994 [1.256168307s] May 11 20:49:51.122: INFO: Created: latency-svc-78j25 May 11 20:49:51.179: INFO: Got endpoints: latency-svc-78j25 [1.299176544s] May 11 20:49:51.180: INFO: Created: latency-svc-qs8rt May 11 20:49:51.206: INFO: Got endpoints: latency-svc-qs8rt [1.234886266s] May 11 20:49:51.229: INFO: Created: latency-svc-lwkkk May 11 20:49:51.249: INFO: Got endpoints: latency-svc-lwkkk [1.242125629s] May 11 20:49:51.272: INFO: Created: latency-svc-6xc6z May 11 20:49:51.328: INFO: Got endpoints: latency-svc-6xc6z [1.237866398s] May 11 20:49:51.343: INFO: Created: latency-svc-dbmlh May 11 20:49:51.358: INFO: Got endpoints: latency-svc-dbmlh [1.217618656s] May 11 20:49:51.386: INFO: Created: latency-svc-rrd5b May 11 20:49:51.400: INFO: Got endpoints: latency-svc-rrd5b [1.105030659s] May 11 20:49:51.421: INFO: Created: latency-svc-qrzqj May 11 20:49:51.472: INFO: Got 
endpoints: latency-svc-qrzqj [1.139970505s] May 11 20:49:51.495: INFO: Created: latency-svc-f45wt May 11 20:49:51.510: INFO: Got endpoints: latency-svc-f45wt [1.084372866s] May 11 20:49:51.549: INFO: Created: latency-svc-4zth9 May 11 20:49:51.634: INFO: Got endpoints: latency-svc-4zth9 [1.159186047s] May 11 20:49:51.634: INFO: Latencies: [110.557123ms 149.551731ms 252.353011ms 304.050404ms 457.596286ms 489.265213ms 587.30984ms 665.705465ms 728.101995ms 744.967449ms 751.950296ms 756.771936ms 771.972723ms 781.91215ms 788.136484ms 796.905938ms 811.307223ms 828.812927ms 838.692229ms 840.068561ms 840.337509ms 844.127247ms 849.993705ms 861.178284ms 868.014823ms 869.481302ms 869.700756ms 875.08941ms 877.232321ms 881.385482ms 881.820907ms 883.601454ms 889.569033ms 893.080297ms 905.189264ms 906.447122ms 911.517477ms 911.77663ms 918.010979ms 919.311595ms 935.678835ms 944.577206ms 948.67483ms 951.287795ms 959.253759ms 966.863651ms 974.902093ms 975.603248ms 984.527968ms 985.118329ms 987.993692ms 1.000833774s 1.011099929s 1.015603425s 1.027639707s 1.036178815s 1.041242136s 1.042368275s 1.045493935s 1.047952178s 1.04958009s 1.054229741s 1.054243908s 1.066299452s 1.067137516s 1.070584965s 1.077285742s 1.084372866s 1.093414653s 1.095223211s 1.100241991s 1.102723468s 1.105030659s 1.113078118s 1.118382059s 1.123089256s 1.128226435s 1.138315521s 1.139970505s 1.140609088s 1.144189077s 1.145058509s 1.153069305s 1.154469862s 1.157841879s 1.158084558s 1.159186047s 1.16931902s 1.179669603s 1.180100347s 1.185027424s 1.185335043s 1.196438154s 1.199033334s 1.202546045s 1.204879184s 1.205640985s 1.208771338s 1.217618656s 1.217666863s 1.226403179s 1.233257094s 1.234886266s 1.237866398s 1.241622829s 1.242125629s 1.244507137s 1.256168307s 1.258192386s 1.260683913s 1.269332365s 1.271828455s 1.276715535s 1.280819383s 1.282029194s 1.291060476s 1.294219257s 1.299176544s 1.302399818s 1.307931585s 1.318086534s 1.360215501s 1.364268344s 1.372953698s 1.378005755s 1.38602499s 1.396049565s 1.396199214s 
1.508025969s 1.551947877s 1.561715199s 1.589112751s 1.60244503s 1.610416106s 1.613005091s 1.614892452s 1.631406415s 1.687361273s 1.802006801s 1.862427136s 1.868714173s 2.175463624s 2.394778416s 2.395780497s 2.407393564s 2.426087384s 2.582711599s 2.609803937s 2.928666185s 3.011421584s 3.018665804s 3.025652252s 3.035752137s 3.257424019s 3.265818289s 3.341053382s 3.497558859s 3.501014846s 3.693174341s 3.7283076s 3.754593895s 3.808749814s 3.910936465s 3.915470252s 4.001981148s 4.146176681s 4.185428102s 4.210409426s 4.221742882s 4.256724637s 4.29308308s 4.322541255s 4.502417506s 4.528255175s 4.534748415s 4.546506888s 4.564106655s 4.585655622s 4.629658294s 4.662899412s 4.715837608s 4.823155608s 5.02535906s 5.073407006s 5.149453543s 5.196240658s 5.205947994s 5.219580812s 5.221427783s 5.6219358s 5.685617972s 6.049924729s 6.487027072s 6.733499434s 6.752920259s 6.786703445s 6.826607945s 6.975103761s 7.039900019s 7.069082787s] May 11 20:49:51.634: INFO: 50 %ile: 1.226403179s May 11 20:49:51.634: INFO: 90 %ile: 4.715837608s May 11 20:49:51.634: INFO: 99 %ile: 7.039900019s May 11 20:49:51.634: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:49:51.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3156" for this suite. 
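The 50/90/99 %ile figures in the latency summary above are derived from the 200 collected endpoint-creation samples. A minimal sketch of a nearest-rank percentile over such samples (the exact indexing rule used by the e2e framework is an assumption here, and the sample list below is illustrative, not the 200 values from the log):

```python
import math

def percentile(samples_seconds, p):
    """Nearest-rank p-th percentile of a list of latencies (seconds)."""
    ranked = sorted(samples_seconds)
    k = max(1, math.ceil(p / 100 * len(ranked)))  # 1-based rank
    return ranked[k - 1]

# Illustrative samples spanning the same rough range as the log above.
latencies = [0.11, 0.45, 0.9, 1.2, 1.3, 2.4, 4.7, 5.2, 6.8, 7.0]
print(percentile(latencies, 50))  # -> 1.3
print(percentile(latencies, 90))  # -> 6.8
print(percentile(latencies, 99))  # -> 7.0
```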
• [SLOW TEST:34.482 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":26,"skipped":500,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:49:51.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 20:49:52.257: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 20:49:54.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:49:56.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:49:58.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 20:50:01.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 11 20:50:02.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 11 20:50:03.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 11 20:50:04.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 11 20:50:05.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 11 20:50:06.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 11 20:50:06.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8076-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:50:08.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7140" for this suite. STEP: Destroying namespace "webhook-7140-markers" for this suite. 
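The once-per-second "Waiting for amount of service:e2e-test-webhook endpoints to be 1" entries above are a poll-until-true loop with a deadline. A generic sketch of that pattern (the function name and defaults are illustrative, not the framework's actual API):

```python
import time

def wait_for(condition, timeout=10.0, interval=1.0):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` elapses; returns whether the condition was met."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

In the log the condition is "the webhook Service has exactly one paired endpoint"; the framework keeps re-checking until the Endpoints object catches up with the ready webhook pod.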
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.756 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":27,"skipped":509,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:50:10.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5897 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-5897 May 11 20:50:11.394: INFO: Found 0 stateful pods, waiting for 1 
May 11 20:50:21.814: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 11 20:50:22.140: INFO: Deleting all statefulset in ns statefulset-5897 May 11 20:50:22.226: INFO: Scaling statefulset ss to 0 May 11 20:50:42.525: INFO: Waiting for statefulset status.replicas updated to 0 May 11 20:50:42.569: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:50:42.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5897" for this suite. • [SLOW TEST:32.555 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":28,"skipped":523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:50:42.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-86288dba-695f-43ee-944d-08f5fbd9b274 in namespace container-probe-646 May 11 20:50:51.823: INFO: Started pod liveness-86288dba-695f-43ee-944d-08f5fbd9b274 in namespace container-probe-646 STEP: checking the pod's current state and verifying that restartCount is present May 11 20:50:51.855: INFO: Initial restart count of pod liveness-86288dba-695f-43ee-944d-08f5fbd9b274 is 0 May 11 20:51:14.988: INFO: Restart count of pod container-probe-646/liveness-86288dba-695f-43ee-944d-08f5fbd9b274 is now 1 (23.133034461s elapsed) May 11 20:51:33.323: INFO: Restart count of pod container-probe-646/liveness-86288dba-695f-43ee-944d-08f5fbd9b274 is now 2 (41.467291665s elapsed) May 11 20:51:55.800: INFO: Restart count of pod container-probe-646/liveness-86288dba-695f-43ee-944d-08f5fbd9b274 is now 3 (1m3.944730795s elapsed) May 11 20:52:14.304: INFO: Restart count of pod container-probe-646/liveness-86288dba-695f-43ee-944d-08f5fbd9b274 is now 4 (1m22.448729234s elapsed) May 11 20:53:26.238: INFO: Restart count of pod container-probe-646/liveness-86288dba-695f-43ee-944d-08f5fbd9b274 is now 5 (2m34.382329885s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:53:26.516: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-646" for this suite. • [SLOW TEST:163.883 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:53:26.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:53:27.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubelet-test-4044" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":602,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:53:27.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-edfe696a-2a26-4ba2-86f1-90a303f3fbb4 STEP: Creating a pod to test consume secrets May 11 20:53:27.871: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b651633-07a4-4012-a675-e25895cd81ae" in namespace "projected-3721" to be "Succeeded or Failed" May 11 20:53:27.902: INFO: Pod "pod-projected-secrets-4b651633-07a4-4012-a675-e25895cd81ae": Phase="Pending", Reason="", readiness=false. Elapsed: 31.255881ms May 11 20:53:29.947: INFO: Pod "pod-projected-secrets-4b651633-07a4-4012-a675-e25895cd81ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075791286s May 11 20:53:31.971: INFO: Pod "pod-projected-secrets-4b651633-07a4-4012-a675-e25895cd81ae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.099532673s May 11 20:53:33.974: INFO: Pod "pod-projected-secrets-4b651633-07a4-4012-a675-e25895cd81ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103053976s STEP: Saw pod success May 11 20:53:33.974: INFO: Pod "pod-projected-secrets-4b651633-07a4-4012-a675-e25895cd81ae" satisfied condition "Succeeded or Failed" May 11 20:53:33.976: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-4b651633-07a4-4012-a675-e25895cd81ae container projected-secret-volume-test: STEP: delete the pod May 11 20:53:34.023: INFO: Waiting for pod pod-projected-secrets-4b651633-07a4-4012-a675-e25895cd81ae to disappear May 11 20:53:34.038: INFO: Pod pod-projected-secrets-4b651633-07a4-4012-a675-e25895cd81ae no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:53:34.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3721" for this suite. 
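The projected-secret test above mounts one secret key at a mapped path with an explicit item mode. A minimal sketch of what the in-pod verification amounts to: write the mapped file, apply the mode, and check the permission bits (paths, the filename, and the 0o400 mode are illustrative):

```python
import os
import stat
import tempfile

def write_mapped_item(directory, filename, data, mode=0o400):
    """Write one mapped secret item and set its file mode."""
    path = os.path.join(directory, filename)
    with open(path, "wb") as f:
        f.write(data)
    os.chmod(path, mode)
    return path

with tempfile.TemporaryDirectory() as d:
    p = write_mapped_item(d, "new-path-data-1", b"value-1", 0o400)
    print(oct(stat.S_IMODE(os.stat(p).st_mode)))  # -> 0o400
```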
• [SLOW TEST:6.504 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":620,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:53:34.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 11 20:53:34.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12b4dfd4-6819-4a34-b916-896d7ce75e90" in namespace "projected-2714" to be "Succeeded or Failed" May 11 20:53:34.137: INFO: Pod "downwardapi-volume-12b4dfd4-6819-4a34-b916-896d7ce75e90": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.178307ms May 11 20:53:36.142: INFO: Pod "downwardapi-volume-12b4dfd4-6819-4a34-b916-896d7ce75e90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017659502s May 11 20:53:38.147: INFO: Pod "downwardapi-volume-12b4dfd4-6819-4a34-b916-896d7ce75e90": Phase="Running", Reason="", readiness=true. Elapsed: 4.022552878s May 11 20:53:40.150: INFO: Pod "downwardapi-volume-12b4dfd4-6819-4a34-b916-896d7ce75e90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025994763s STEP: Saw pod success May 11 20:53:40.150: INFO: Pod "downwardapi-volume-12b4dfd4-6819-4a34-b916-896d7ce75e90" satisfied condition "Succeeded or Failed" May 11 20:53:40.152: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-12b4dfd4-6819-4a34-b916-896d7ce75e90 container client-container: STEP: delete the pod May 11 20:53:40.260: INFO: Waiting for pod downwardapi-volume-12b4dfd4-6819-4a34-b916-896d7ce75e90 to disappear May 11 20:53:40.303: INFO: Pod downwardapi-volume-12b4dfd4-6819-4a34-b916-896d7ce75e90 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:53:40.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2714" for this suite. 
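The downward API test above projects the container's cpu limit into a file via a `resourceFieldRef`, which applies a divisor to the raw quantity. A sketch of that conversion under the assumption that the projected value is the quantity divided by the divisor and rounded up (the quantities below are illustrative, not the full Kubernetes quantity grammar):

```python
def apply_divisor(millicores, divisor_millicores):
    """Integer value written to the downward API file: ceil(limit / divisor).
    Both arguments in millicores; rounding up is assumed here."""
    return -(-millicores // divisor_millicores)  # ceiling division

print(apply_divisor(500, 1000))  # 500m limit, divisor "1"  -> 1
print(apply_divisor(1250, 1))    # 1250m limit, divisor "1m" -> 1250
```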
• [SLOW TEST:6.263 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":631,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:53:40.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-179be699-a6f6-4801-bf52-80b6f21993bc STEP: Creating a pod to test consume configMaps May 11 20:53:40.548: INFO: Waiting up to 5m0s for pod "pod-configmaps-62ad9ad3-861f-43a3-93d9-8a20056aa3ef" in namespace "configmap-1740" to be "Succeeded or Failed" May 11 20:53:40.555: INFO: Pod "pod-configmaps-62ad9ad3-861f-43a3-93d9-8a20056aa3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 7.310228ms May 11 20:53:42.666: INFO: Pod "pod-configmaps-62ad9ad3-861f-43a3-93d9-8a20056aa3ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.118022082s May 11 20:53:44.668: INFO: Pod "pod-configmaps-62ad9ad3-861f-43a3-93d9-8a20056aa3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120599688s May 11 20:53:46.707: INFO: Pod "pod-configmaps-62ad9ad3-861f-43a3-93d9-8a20056aa3ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.159486098s STEP: Saw pod success May 11 20:53:46.707: INFO: Pod "pod-configmaps-62ad9ad3-861f-43a3-93d9-8a20056aa3ef" satisfied condition "Succeeded or Failed" May 11 20:53:46.772: INFO: Trying to get logs from node kali-worker pod pod-configmaps-62ad9ad3-861f-43a3-93d9-8a20056aa3ef container configmap-volume-test: STEP: delete the pod May 11 20:53:47.271: INFO: Waiting for pod pod-configmaps-62ad9ad3-861f-43a3-93d9-8a20056aa3ef to disappear May 11 20:53:47.388: INFO: Pod pod-configmaps-62ad9ad3-861f-43a3-93d9-8a20056aa3ef no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 11 20:53:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1740" for this suite. 
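The elapsed durations logged throughout ("7.310228ms", "2.118022082s", "1m3.944730795s") use Go's `time.Duration` string format. A minimal parser for the units that appear in this log, handy when post-processing such output (a sketch; full Go duration syntax also allows `h`, `us`, and `ns`):

```python
import re

def parse_go_duration(s):
    """Convert a Go duration string using m/s/ms units to float seconds."""
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(m?s|m)", s):
        total += float(value) * {"s": 1.0, "ms": 1e-3, "m": 60.0}[unit]
    return total

print(parse_go_duration("1m3.944730795s"))  # -> 63.944730795
```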
• [SLOW TEST:7.242 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":632,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 11 20:53:47.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 11 20:53:48.575: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 20:53:48.638: INFO: Waiting for terminating namespaces to be deleted... 
May 11 20:53:48.641: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 11 20:53:48.647: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 11 20:53:48.647: INFO: Container kindnet-cni ready: true, restart count 1 May 11 20:53:48.647: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 11 20:53:48.647: INFO: Container kube-proxy ready: true, restart count 0 May 11 20:53:48.647: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 11 20:53:48.652: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 11 20:53:48.652: INFO: Container kube-proxy ready: true, restart count 0 May 11 20:53:48.652: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 11 20:53:48.652: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e02c318a-8efb-436d-934f-2beb4d1f62de 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-e02c318a-8efb-436d-934f-2beb4d1f62de off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e02c318a-8efb-436d-934f-2beb4d1f62de
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:53:57.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4655" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:9.779 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":34,"skipped":651,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:53:57.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-nkx7
STEP: Creating a pod to test atomic-volume-subpath
May 11 20:53:57.876: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nkx7" in namespace "subpath-9844" to be "Succeeded or Failed"
May 11 20:53:57.962: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Pending", Reason="", readiness=false. Elapsed: 86.203082ms
May 11 20:53:59.988: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112578171s
May 11 20:54:02.091: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214833356s
May 11 20:54:04.094: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 6.21811633s
May 11 20:54:06.238: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 8.362649792s
May 11 20:54:08.283: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 10.407494535s
May 11 20:54:10.288: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 12.412144659s
May 11 20:54:12.366: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 14.490755527s
May 11 20:54:14.370: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 16.494596422s
May 11 20:54:16.374: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 18.498203192s
May 11 20:54:18.378: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 20.50248597s
May 11 20:54:20.383: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 22.506829623s
May 11 20:54:22.389: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Running", Reason="", readiness=true. Elapsed: 24.513686891s
May 11 20:54:24.394: INFO: Pod "pod-subpath-test-configmap-nkx7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.518580697s
STEP: Saw pod success
May 11 20:54:24.394: INFO: Pod "pod-subpath-test-configmap-nkx7" satisfied condition "Succeeded or Failed"
May 11 20:54:24.398: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-nkx7 container test-container-subpath-configmap-nkx7:
STEP: delete the pod
May 11 20:54:24.431: INFO: Waiting for pod pod-subpath-test-configmap-nkx7 to disappear
May 11 20:54:24.441: INFO: Pod pod-subpath-test-configmap-nkx7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nkx7
May 11 20:54:24.441: INFO: Deleting pod "pod-subpath-test-configmap-nkx7" in namespace "subpath-9844"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:54:24.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9844" for this suite.
• [SLOW TEST:27.116 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":35,"skipped":661,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:54:24.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 20:54:24.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b80a55e-f365-4bb5-8c20-786c54b35d73" in namespace "downward-api-9675" to be "Succeeded or Failed"
May 11 20:54:24.926: INFO: Pod "downwardapi-volume-2b80a55e-f365-4bb5-8c20-786c54b35d73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.748365ms
May 11 20:54:27.020: INFO: Pod "downwardapi-volume-2b80a55e-f365-4bb5-8c20-786c54b35d73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096948586s
May 11 20:54:29.024: INFO: Pod "downwardapi-volume-2b80a55e-f365-4bb5-8c20-786c54b35d73": Phase="Running", Reason="", readiness=true. Elapsed: 4.100783314s
May 11 20:54:31.072: INFO: Pod "downwardapi-volume-2b80a55e-f365-4bb5-8c20-786c54b35d73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14946178s
STEP: Saw pod success
May 11 20:54:31.072: INFO: Pod "downwardapi-volume-2b80a55e-f365-4bb5-8c20-786c54b35d73" satisfied condition "Succeeded or Failed"
May 11 20:54:31.193: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-2b80a55e-f365-4bb5-8c20-786c54b35d73 container client-container:
STEP: delete the pod
May 11 20:54:31.320: INFO: Waiting for pod downwardapi-volume-2b80a55e-f365-4bb5-8c20-786c54b35d73 to disappear
May 11 20:54:31.329: INFO: Pod downwardapi-volume-2b80a55e-f365-4bb5-8c20-786c54b35d73 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:54:31.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9675" for this suite.
• [SLOW TEST:6.886 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":666,"failed":0}
SSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:54:31.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 11 20:54:31.622: INFO: Waiting up to 5m0s for pod "downward-api-62568f5d-8dc8-4624-b27a-f8b81ac19c86" in namespace "downward-api-1395" to be "Succeeded or Failed"
May 11 20:54:31.679: INFO: Pod "downward-api-62568f5d-8dc8-4624-b27a-f8b81ac19c86": Phase="Pending", Reason="", readiness=false. Elapsed: 57.025989ms
May 11 20:54:33.816: INFO: Pod "downward-api-62568f5d-8dc8-4624-b27a-f8b81ac19c86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193890847s
May 11 20:54:35.820: INFO: Pod "downward-api-62568f5d-8dc8-4624-b27a-f8b81ac19c86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198163596s
STEP: Saw pod success
May 11 20:54:35.820: INFO: Pod "downward-api-62568f5d-8dc8-4624-b27a-f8b81ac19c86" satisfied condition "Succeeded or Failed"
May 11 20:54:35.823: INFO: Trying to get logs from node kali-worker pod downward-api-62568f5d-8dc8-4624-b27a-f8b81ac19c86 container dapi-container:
STEP: delete the pod
May 11 20:54:35.967: INFO: Waiting for pod downward-api-62568f5d-8dc8-4624-b27a-f8b81ac19c86 to disappear
May 11 20:54:36.012: INFO: Pod downward-api-62568f5d-8dc8-4624-b27a-f8b81ac19c86 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:54:36.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1395" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":670,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:54:36.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 20:54:36.194: INFO: (0)
/api/v1/nodes/kali-worker2/proxy/logs/:
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 20:54:36.414: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 11 20:54:36.480: INFO: Pod name sample-pod: Found 0 pods out of 1
May 11 20:54:41.558: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 11 20:54:41.558: INFO: Creating deployment "test-rolling-update-deployment"
May 11 20:54:41.630: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May 11 20:54:41.702: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May 11 20:54:43.975: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May 11 20:54:44.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827281, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827281, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827281, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827281, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 20:54:46.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827281, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827281, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827281, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827281, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 20:54:48.059: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 11 20:54:48.069: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-9305 /apis/apps/v1/namespaces/deployment-9305/deployments/test-rolling-update-deployment c06d277a-5ab2-41af-8094-c1cb92c39a52 3512218 1 2020-05-11 20:54:41 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-05-11 20:54:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 
58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-11 20:54:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 
103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0021b86c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 20:54:41 +0000 UTC,LastTransitionTime:2020-05-11 20:54:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-05-11 20:54:47 +0000 UTC,LastTransitionTime:2020-05-11 20:54:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May 11 20:54:48.073: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-9305 /apis/apps/v1/namespaces/deployment-9305/replicasets/test-rolling-update-deployment-59d5cb45c7 a4f99d9b-9d1e-473a-b7e0-adc736bb5e00 3512204 1 2020-05-11 20:54:41 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c06d277a-5ab2-41af-8094-c1cb92c39a52 0xc0021b9037 0xc0021b9038}] []  [{kube-controller-manager Update apps/v1 2020-05-11 20:54:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 48 54 100 50 55 55 97 45 53 97 98 50 45 52 49 97 102 45 56 48 57 52 45 99 49 99 98 57 50 99 51 57 97 53 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 
116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0021b90c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 11 20:54:48.073: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
May 11 20:54:48.074: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-9305 /apis/apps/v1/namespaces/deployment-9305/replicasets/test-rolling-update-controller d40505dc-36b3-41d0-943c-2786cffa4c73 3512215 2 2020-05-11 20:54:36 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c06d277a-5ab2-41af-8094-c1cb92c39a52 0xc0021b8f27 0xc0021b8f28}] []  [{e2e.test Update apps/v1 2020-05-11 20:54:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 
111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-11 20:54:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 48 54 100 50 55 55 97 45 53 97 98 50 45 52 49 97 102 45 56 48 57 52 45 99 49 99 98 57 50 99 51 57 97 53 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 
114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0021b8fc8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 11 20:54:48.097: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-mlzrw" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-mlzrw test-rolling-update-deployment-59d5cb45c7- deployment-9305 /api/v1/namespaces/deployment-9305/pods/test-rolling-update-deployment-59d5cb45c7-mlzrw 48a75c7a-d9d3-4c98-9737-aa1a2284c986 3512203 0 2020-05-11 20:54:41 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 a4f99d9b-9d1e-473a-b7e0-adc736bb5e00 0xc0021b9597 0xc0021b9598}] []  [{kube-controller-manager Update v1 2020-05-11 20:54:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 52 102 57 57 100 57 98 45 57 100 49 101 45 52 55 51 97 45 98 55 101 48 45 97 100 99 55 51 54 98 98 53 101 48 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 
123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:54:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 
115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zwm96,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zwm96,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zwm96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:
nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:46 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.18,StartTime:2020-05-11 20:54:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:54:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://3e6c64678c5a9688e4dacf0ee979a9f85ec6a1aec3505086818cb55d8204e998,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:54:48.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9305" for this suite.

• [SLOW TEST:11.806 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":39,"skipped":730,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:54:48.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 20:54:48.254: INFO: Creating deployment "webserver-deployment"
May 11 20:54:48.257: INFO: Waiting for observed generation 1
May 11 20:54:50.292: INFO: Waiting for all required pods to come up
May 11 20:54:50.298: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 11 20:55:08.522: INFO: Waiting for deployment "webserver-deployment" to complete
May 11 20:55:08.780: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 11 20:55:08.996: INFO: Updating deployment webserver-deployment
May 11 20:55:08.996: INFO: Waiting for observed generation 2
May 11 20:55:12.181: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 11 20:55:13.055: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 11 20:55:13.636: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 11 20:55:13.876: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 11 20:55:13.876: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 11 20:55:13.904: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 11 20:55:13.937: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 11 20:55:13.937: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 11 20:55:13.944: INFO: Updating deployment webserver-deployment
May 11 20:55:13.944: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 11 20:55:14.553: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 11 20:55:14.556: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
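The 20/13 split verified in the last two steps comes from proportional scaling: when the deployment is scaled from 10 to 30, the controller resizes each ReplicaSet in proportion to its share of the surge-inflated total. A simplified sketch (the real kube-controller-manager logic also redistributes rounding leftovers explicitly; this model happens to balance for the values in this test):

```python
def proportional_scale(rs_sizes, new_desired, max_surge):
    """Resize each ReplicaSet proportionally to its current share.

    Simplified model: each ReplicaSet's new size is its current size scaled
    by new_allowed / old_total, rounded half away from zero.
    """
    old_total = sum(rs_sizes)              # 8 + 5 = 13 in the test above
    new_allowed = new_desired + max_surge  # 30 + 3 = 33
    return [int(s * new_allowed / old_total + 0.5) for s in rs_sizes]
```

With the sizes from this run, `proportional_scale([8, 5], 30, 3)` yields `[20, 13]`: the old ReplicaSet (8 of 13 replicas) gets 8 x 33 / 13 ≈ 20, the new one 5 x 33 / 13 ≈ 13, matching the `.spec.replicas` values the test asserts.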
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 11 20:55:19.047: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-3877 /apis/apps/v1/namespaces/deployment-3877/deployments/webserver-deployment 2178b370-e203-4cd0-b7fc-3b5723766971 3512545 3 2020-05-11 20:54:48 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-11 20:55:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 
125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-11 20:55:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 
123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002243f98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-11 20:55:14 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-11 20:55:15 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
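The `Replicas:33` and `AvailableReplicas:8` figures in the status dump above follow from the bounds implied by the strategy (`MaxUnavailable:2`, `MaxSurge:3`). A minimal sketch of that arithmetic (illustrative only):

```python
def rollout_bounds(desired, max_surge, max_unavailable):
    """Return (min ready pods, max total pods) allowed during a rollout."""
    return desired - max_unavailable, desired + max_surge
```

Here `rollout_bounds(30, 3, 2)` gives `(28, 33)`: total pods may surge to 33, which is exactly the `Replicas:33` reported. Availability, however, is stuck at 8 because the second rollout uses the broken `webserver:404` image, so only the old ReplicaSet's pods from the pre-scale phase (10 desired minus maxUnavailable 2) ever became ready.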

May 11 20:55:19.816: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-3877 /apis/apps/v1/namespaces/deployment-3877/replicasets/webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 3512543 3 2020-05-11 20:55:08 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2178b370-e203-4cd0-b7fc-3b5723766971 0xc003056427 0xc003056428}] []  [{kube-controller-manager Update apps/v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 49 55 56 98 51 55 48 45 101 50 48 51 45 52 99 100 48 45 98 55 102 99 45 51 98 53 55 50 51 55 54 54 57 55 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 
58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030564a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 11 20:55:19.816: INFO: All old ReplicaSets of Deployment "webserver-deployment":
May 11 20:55:19.816: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-3877 /apis/apps/v1/namespaces/deployment-3877/replicasets/webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 3512540 3 2020-05-11 20:54:48 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2178b370-e203-4cd0-b7fc-3b5723766971 0xc003056507 0xc003056508}] []  [{kube-controller-manager Update apps/v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 49 55 56 98 51 55 48 45 101 50 48 51 45 52 99 100 48 45 98 55 102 99 45 51 98 53 55 50 51 55 54 54 57 55 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 
100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003056578  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
May 11 20:55:20.306: INFO: Pod "webserver-deployment-6676bcd6d4-8f8mn" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8f8mn webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-8f8mn 707c73b6-a0f1-48ca-8551-02f225247f9c 3512438 0 2020-05-11 20:55:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc0023e7587 0xc0023e7588}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 57 54 97 53 99 57 51 45 100 99 100 48 45 52 102 49 101 45 98 50 97 48 45 99 57 48 51 54 53 98 100 99 49 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
Context":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-11 20:55:09 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 20:55:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
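These Go struct dumps print each managed-fields `FieldsV1{Raw:*[...]}` payload as a list of decimal UTF-8 byte values rather than as text. A minimal Python sketch to turn such a byte dump back into its readable JSON (the helper name `decode_fieldsv1` is ours, not part of the test framework):

```python
def decode_fieldsv1(raw: str) -> str:
    """Decode a FieldsV1 Raw dump (space-separated decimal byte values) into JSON text."""
    return bytes(int(tok) for tok in raw.split()).decode("utf-8")

# Example: the byte sequence for the JSON fragment {"f:spec":{}}
print(decode_fieldsv1("123 34 102 58 115 112 101 99 34 58 123 125 125"))  # → {"f:spec":{}}
```

The decoded payloads are the server-side-apply field-ownership maps (`"f:..."` keys for fields, `"k:{...}"` keys for list entries) recorded per manager in `metadata.managedFields`.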
May 11 20:55:20.307: INFO: Pod "webserver-deployment-6676bcd6d4-b95lz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b95lz webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-b95lz d30cf9d5-ce73-48f2-afa4-4455d97a1a95 3512534 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc0023e7877 0xc0023e7878}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.307: INFO: Pod "webserver-deployment-6676bcd6d4-bzmw5" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bzmw5 webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-bzmw5 767f534c-4e58-4523-ac2a-4ebf3e89a7c3 3512465 0 2020-05-11 20:55:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc0023e7a47 0xc0023e7a48}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:09 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:10 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 20:55:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.308: INFO: Pod "webserver-deployment-6676bcd6d4-cbj7g" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cbj7g webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-cbj7g 9166c183-5d9c-4194-ab9b-436454d82cc3 3512578 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc0023e7cc7 0xc0023e7cc8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:17 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-11 20:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.308: INFO: Pod "webserver-deployment-6676bcd6d4-cmnfq" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cmnfq webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-cmnfq 4c5c1113-51ab-4533-9564-6af0846d5521 3512455 0 2020-05-11 20:55:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc003150097 0xc003150098}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:09 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:09 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 20:55:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
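The `FieldsV1{Raw:*[123 34 102 ...]}` runs in these pod dumps are Go printing the managedFields `Raw` field (a `[]byte`) as decimal byte values; each run is ordinary UTF-8 JSON describing which fields each manager (kube-controller-manager, kubelet) owns. A minimal sketch for decoding such a run — the helper name `decode_fields_v1` is mine, not part of the e2e framework:

```python
def decode_fields_v1(raw: str) -> str:
    """Decode a space-separated decimal byte dump (the numbers inside
    'Raw:*[...]') back into the UTF-8 JSON text it represents."""
    return bytes(int(b) for b in raw.split()).decode("utf-8")

# A short made-up run for illustration: decodes to {"f:spec":{}}
print(decode_fields_v1("123 34 102 58 115 112 101 99 34 58 123 125 125"))
```

Pasting one of the full byte runs from the dumps above into this helper yields the managedFields JSON, e.g. `{"f:status":{"f:conditions":{...},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}` for the kubelet entries.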
May 11 20:55:20.308: INFO: Pod "webserver-deployment-6676bcd6d4-jjfxq" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jjfxq webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-jjfxq 3212ee3e-3687-442f-9470-e239cdfa7945 3512445 0 2020-05-11 20:55:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc003150247 0xc003150248}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:09 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:09 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-11 20:55:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.309: INFO: Pod "webserver-deployment-6676bcd6d4-mllk8" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mllk8 webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-mllk8 e0ab1e9a-6295-4c2c-a17b-0625303a5a53 3512598 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc0031503f7 0xc0031503f8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:19 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 20:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.309: INFO: Pod "webserver-deployment-6676bcd6d4-nf6v8" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nf6v8 webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-nf6v8 858a1ded-559e-4389-b792-ec0e74e7c3ce 3512524 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc0031505a7 0xc0031505a8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.309: INFO: Pod "webserver-deployment-6676bcd6d4-t5887" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-t5887 webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-t5887 59353ff2-8dfa-4c72-b2a2-864e25120504 3512530 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc0031506e7 0xc0031506e8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
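The `FieldsV1{Raw:*[123 34 ...]}` values in the managedFields entries of these pod dumps are JSON documents that Go's struct printer renders as space-separated decimal byte values. A minimal sketch for turning such a dump back into readable JSON (the sample string below is a shortened, hypothetical excerpt for illustration, not the full array from this log):

```python
# Decode a FieldsV1 Raw dump that Go's %v printer rendered as
# space-separated decimal byte values back into its JSON text.
def decode_fieldsv1_raw(raw: str) -> str:
    return bytes(int(b) for b in raw.split()).decode("utf-8")

# Shortened sample (hypothetical excerpt, not the full array from the log):
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
print(decode_fieldsv1_raw(sample))  # -> {"f:metadata":{}}
```

Applied to the full arrays above, this yields the server-side-apply field ownership maps (e.g. `{"f:metadata":{"f:generateName":{},...}}` for the kube-controller-manager entry and `{"f:status":{"f:conditions":{...}}}` for the kubelet entry).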
May 11 20:55:20.310: INFO: Pod "webserver-deployment-6676bcd6d4-wdhts" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wdhts webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-wdhts d1c41a7e-ee82-4f85-8b78-5b20d147e888 3512470 0 2020-05-11 20:55:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc003150827 0xc003150828}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 57 54 97 53 99 57 51 45 100 99 100 48 45 52 102 49 101 45 98 50 97 48 45 99 57 48 51 54 53 98 100 99 49 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-11 20:55:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.310: INFO: Pod "webserver-deployment-6676bcd6d4-wj54f" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wj54f webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-wj54f 629f9635-8ded-4f0a-b2c5-71a822d368c8 3512567 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc0031509d7 0xc0031509d8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 57 54 97 53 99 57 51 45 100 99 100 48 45 52 102 49 101 45 98 50 97 48 45 99 57 48 51 54 53 98 100 99 49 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-11 20:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.310: INFO: Pod "webserver-deployment-6676bcd6d4-wtkts" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wtkts webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-wtkts 4f6af37d-9730-4cb6-bd24-2dc337333104 3512592 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc003150b97 0xc003150b98}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 57 54 97 53 99 57 51 45 100 99 100 48 45 52 102 49 101 45 98 50 97 48 45 99 57 48 51 54 53 98 100 99 49 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-11 20:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.311: INFO: Pod "webserver-deployment-6676bcd6d4-z4n8f" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-z4n8f webserver-deployment-6676bcd6d4- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-6676bcd6d4-z4n8f e36d2f23-e7d3-4637-b342-94cbcc229e7c 3512553 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e96a5c93-dcd0-4f1e-b2a0-c90365bdc19d 0xc003150d67 0xc003150d68}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 57 54 97 53 99 57 51 45 100 99 100 48 45 52 102 49 101 45 98 50 97 48 45 99 57 48 51 54 53 98 100 99 49 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-11 20:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
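The `FieldsV1{Raw:*[123 34 102 58 ...]}` runs in the dumps above are Go's default rendering of a `[]byte` slice: each number is the decimal ASCII value of one byte of the managed-fields JSON (123 is `{`, 34 is `"`, and so on). A minimal sketch to decode such a run back into readable JSON (the function name is mine, not part of the e2e framework):

```python
def decode_fieldsv1(raw: str) -> str:
    """Decode a space-separated list of decimal byte values,
    as printed inside FieldsV1{Raw:*[...]}, into its JSON text."""
    return bytes(int(b) for b in raw.split()).decode("utf-8")

# Example: the prefix of the kube-controller-manager entry decodes to
# the start of the managed-fields JSON object.
print(decode_fieldsv1("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"))
# → {"f:metadata":{}}
```

Applied to the full arrays, this yields the server-side-apply field ownership records (`f:metadata`, `f:spec`, `f:status`, ...) for the `kube-controller-manager` and `kubelet` field managers.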
May 11 20:55:20.311: INFO: Pod "webserver-deployment-84855cf797-5547j" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-5547j webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-5547j 2c4e3013-51ab-4b5b-a779-3c465a831372 3512337 0 2020-05-11 20:54:48 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc003150f37 0xc003150f38}] []  [{kube-controller-manager Update v1 2020-05-11 20:54:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.22,StartTime:2020-05-11 20:54:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:54:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9ccbb0de6d63c97a1704d2e4ab8d0f538d31c52f888625e7b43b2357248b4d55,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
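The `Pod "..." is available` lines report the deployment helper's availability check, which hinges on the pod's `Ready` condition being `True` (the framework also honors `minReadySeconds`, ignored in this sketch). An illustrative check over a pod object in dict form (my own helper, not the e2e framework's code):

```python
def pod_is_ready(pod: dict) -> bool:
    """Return True when the pod's Ready condition is True,
    mirroring (simplified) what the e2e availability check looks at."""
    conditions = pod.get("status", {}).get("conditions", [])
    return any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions)

# The available pods above have Ready=True; the Pending one has
# Ready=False with reason ContainersNotReady.
ready_pod = {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
print(pod_is_ready(ready_pod))  # → True
```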
May 11 20:55:20.311: INFO: Pod "webserver-deployment-84855cf797-5wc4q" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-5wc4q webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-5wc4q 0d2603ce-221c-445a-846c-806421cdf830 3512349 0 2020-05-11 20:54:48 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc0031510e7 0xc0031510e8}] []  [{kube-controller-manager Update v1 2020-05-11 20:54:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:01 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.20,StartTime:2020-05-11 20:54:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:54:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://073677a27cc6f3ea7c5909dd1aa3c319b4281fac7dd311ed0269b42160d0abc4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.311: INFO: Pod "webserver-deployment-84855cf797-7gvv4" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-7gvv4 webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-7gvv4 3ae27967-08cd-498a-8134-de3c96f9a854 3512320 0 2020-05-11 20:54:48 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc003151297 0xc003151298}] []  [{kube-controller-manager Update v1 2020-05-11 20:54:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:54:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 54 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:56 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.62,StartTime:2020-05-11 20:54:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:54:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://81c8aa531951055c4e28f5c076a416a7a0db7b8c13b2ff7a24b19df5b810cecd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
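The pods reported `is not available` run `Image:webserver:404`, an image tag this deployment test appears to use deliberately as unresolvable, so those pods stay in `Phase:Pending` with the container `Waiting` (reason `ContainerCreating` here, typically progressing to an image-pull error) and never reach `Ready`. A small sketch for pulling that waiting reason out of a pod dict (hypothetical helper, not framework code):

```python
def waiting_reason(pod: dict, container: str):
    """Return the Waiting reason for the named container, or None
    if the container is missing or not in a Waiting state."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == container:
            return (cs.get("state", {}).get("waiting") or {}).get("reason")
    return None

pending_pod = {"status": {"containerStatuses": [
    {"name": "httpd", "state": {"waiting": {"reason": "ContainerCreating"}}}]}}
print(waiting_reason(pending_pod, "httpd"))  # → ContainerCreating
```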
May 11 20:55:20.312: INFO: Pod "webserver-deployment-84855cf797-bjggj" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bjggj webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-bjggj de493201-5cbd-4af7-9159-8b677dac7f54 3512572 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc003151447 0xc003151448}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50f093e1-149d-46c9-946b-e60fe9a9ec80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:16 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 20:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
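The `FieldsV1{Raw:*[…]}` runs in these Pod dumps are the server-side-apply managed-fields JSON printed as a slice of decimal byte values, which is how the Go struct dump renders `[]byte`. A small helper (hypothetical, not part of the e2e framework) can turn such a run back into readable JSON:

```python
import json

def decode_fieldsv1(raw: str) -> str:
    """Decode a FieldsV1 Raw dump like '123 34 102 58 ...' (space-separated
    decimal byte values) back into the JSON string it encodes."""
    data = bytes(int(tok) for tok in raw.split())
    text = data.decode("utf-8")
    json.loads(text)  # sanity-check that the result is valid JSON
    return text

# Example: a short byte run in the same format as the dumps above
print(decode_fieldsv1("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"))
# prints {"f:metadata":{}}
```

Pasting one of the full `Raw:*[…]` runs into `decode_fieldsv1` yields the `"f:metadata"`/`"f:spec"`/`"f:status"` field-ownership map recorded by kube-controller-manager and the kubelet.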
May 11 20:55:20.312: INFO: Pod "webserver-deployment-84855cf797-cchwc" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cchwc webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-cchwc 266f0485-c2b5-43d9-80ad-0714764fd76c 3512531 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc0031515d7 0xc0031515d8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50f093e1-149d-46c9-946b-e60fe9a9ec80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.312: INFO: Pod "webserver-deployment-84855cf797-ddlpp" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ddlpp webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-ddlpp f83e2f2b-4351-4794-9ac9-f110a4d3de8e 3512585 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc003151707 0xc003151708}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50f093e1-149d-46c9-946b-e60fe9a9ec80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:17 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 20:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.312: INFO: Pod "webserver-deployment-84855cf797-h8v6g" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-h8v6g webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-h8v6g 9042803c-a0b1-4099-9edc-192e4046db7d 3512533 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc003151897 0xc003151898}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50f093e1-149d-46c9-946b-e60fe9a9ec80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.313: INFO: Pod "webserver-deployment-84855cf797-lqlc8" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-lqlc8 webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-lqlc8 10019d9e-f923-40af-8f9a-62b456a466bc 3512538 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc0031519c7 0xc0031519c8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50f093e1-149d-46c9-946b-e60fe9a9ec80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-11 20:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
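(Aside, not part of the log: the long runs of decimal numbers inside `FieldsV1{Raw:*[...]}` above are the UTF-8 byte values of the serialized managedFields JSON, printed by Go's default struct formatter. A minimal sketch of a decoder for these runs — the function name `decode_fieldsv1` and the sample string are illustrative, not from the log:)

```python
import json


def decode_fieldsv1(raw: str) -> dict:
    """Decode a space-separated run of decimal byte values (as printed in
    FieldsV1{Raw:*[...]} dumps) back into the managedFields JSON object."""
    data = bytes(int(tok) for tok in raw.split())
    return json.loads(data.decode("utf-8"))


# Hypothetical sample: the bytes below spell out '{"f:metadata":{}}'.
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
print(decode_fieldsv1(sample))  # {'f:metadata': {}}
```

Pasting one of the full byte runs from the dumps above into `decode_fieldsv1` yields the usual server-side-apply field ownership map (`f:metadata`, `f:spec`, `f:status`, and so on) recorded by kube-controller-manager and kubelet.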
May 11 20:55:20.313: INFO: Pod "webserver-deployment-84855cf797-lv94m" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-lv94m webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-lv94m 5dcaf911-1f20-4303-ac91-7e0990fd2f3d 3512556 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc003151b57 0xc003151b58}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 20:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.313: INFO: Pod "webserver-deployment-84855cf797-m5nll" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-m5nll webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-m5nll 3fe08b0e-d5bd-4ff1-9c8a-577f2534fbe3 3512539 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc003151ce7 0xc003151ce8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 20:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.313: INFO: Pod "webserver-deployment-84855cf797-nmmmm" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-nmmmm webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-nmmmm 98d5ab39-f70d-4b49-8572-f95566875278 3512546 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc003151e87 0xc003151e88}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-11 20:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.314: INFO: Pod "webserver-deployment-84855cf797-p5zs6" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-p5zs6 webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-p5zs6 f3fa1225-d044-4434-9dc6-f974145579a4 3512368 0 2020-05-11 20:54:48 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc000af2017 0xc000af2018}] []  [{kube-controller-manager Update v1 2020-05-11 20:54:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 54 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.64,StartTime:2020-05-11 20:54:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:55:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bd7257ec075569cf9086b99b134c53ecf579bc0a4352f9db25f972ce9f63ab7c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.314: INFO: Pod "webserver-deployment-84855cf797-pnflm" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-pnflm webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-pnflm 267d1e49-913a-44d7-a935-07e4be7b5aea 3512532 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc000af21c7 0xc000af21c8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.314: INFO: Pod "webserver-deployment-84855cf797-rqd6r" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-rqd6r webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-rqd6r d8396aa6-50e6-46c1-8a69-838a30cb2d1c 3512548 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc000af22f7 0xc000af22f8}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 20:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.314: INFO: Pod "webserver-deployment-84855cf797-sdb5t" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-sdb5t webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-sdb5t 9338a99a-2053-4a46-bee3-5207d4bc191a 3512529 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc000af2487 0xc000af2488}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.315: INFO: Pod "webserver-deployment-84855cf797-t26nl" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-t26nl webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-t26nl 36a5cec9-e7f9-49f2-99ed-cf8c80e745dd 3512348 0 2020-05-11 20:54:48 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc000af25b7 0xc000af25b8}] []  [{kube-controller-manager Update v1 2020-05-11 20:54:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 48 102 48 57 51 101 49 45 49 52 57 100 45 52 54 99 57 45 57 52 54 98 45 101 54 48 102 101 57 97 57 101 99 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 20:55:01 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 54 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.63,StartTime:2020-05-11 20:54:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:54:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://16392927a4f5b1d791d48c618ee94b819e1fc0c70dd5884fcea00dd8a7779f5c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.315: INFO: Pod "webserver-deployment-84855cf797-tcts6" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-tcts6 webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-tcts6 83314980-34cf-4134-82de-c75b200b16a8 3512384 0 2020-05-11 20:54:48 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc000af2767 0xc000af2768}] []  [{kube-controller-manager Update v1 2020-05-11 20:54:48 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50f093e1-149d-46c9-946b-e60fe9a9ec80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.65,StartTime:2020-05-11 20:54:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:55:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://12ffd00bcae596992e78bee046333899d1215155909dc08bd16458bbd1e047ee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.315: INFO: Pod "webserver-deployment-84855cf797-xnm8c" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xnm8c webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-xnm8c a8d5d7b2-9fe2-4686-8dd8-34bc4a8386c6 3512343 0 2020-05-11 20:54:48 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc000af2917 0xc000af2918}] []  [{kube-controller-manager Update v1 2020-05-11 20:54:48 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50f093e1-149d-46c9-946b-e60fe9a9ec80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:55:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.21\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.21,StartTime:2020-05-11 20:54:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:54:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://37b981ea38973deb648f2a0487e32cef62f353f55f0c0d4864762138c1b14476,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.316: INFO: Pod "webserver-deployment-84855cf797-z6rsc" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-z6rsc webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-z6rsc 70df5eb5-9ebc-41a2-b67d-4767e2a751d4 3512309 0 2020-05-11 20:54:48 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc000af2ae7 0xc000af2ae8}] []  [{kube-controller-manager Update v1 2020-05-11 20:54:48 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50f093e1-149d-46c9-946b-e60fe9a9ec80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-11 20:54:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:54 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:54:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.19,StartTime:2020-05-11 20:54:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:54:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e3297060d75d52d90dc10654fc9b8061f838aa0e7af87e08f3297344d39958f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 11 20:55:20.316: INFO: Pod "webserver-deployment-84855cf797-zpq75" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-zpq75 webserver-deployment-84855cf797- deployment-3877 /api/v1/namespaces/deployment-3877/pods/webserver-deployment-84855cf797-zpq75 a0235d12-1ec6-4bb9-8965-2ccc0cfbb11a 3512526 0 2020-05-11 20:55:14 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 50f093e1-149d-46c9-946b-e60fe9a9ec80 0xc000af2c97 0xc000af2c98}] []  [{kube-controller-manager Update v1 2020-05-11 20:55:14 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50f093e1-149d-46c9-946b-e60fe9a9ec80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c968m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c968m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c968m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
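The long run of decimal values opening the Pod dump above is Go's `%v` rendering of the managedFields `FieldsV1` byte slice; it decodes mechanically back to JSON. A small helper for reading such dumps (not part of the e2e framework, just a convenience):

```python
def decode_byte_run(run: str) -> str:
    """Turn a space-separated run of decimal byte values, as printed for a
    Go []byte field, back into its UTF-8 text."""
    return bytes(int(tok) for tok in run.split()).decode("utf-8")

# A slice of the dump above decodes to one of the managedFields keys:
print(decode_byte_run("34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125"))
# prints "f:dnsPolicy":{}
```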
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:55:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3877" for this suite.

• [SLOW TEST:33.402 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":40,"skipped":739,"failed":0}
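Proportional scaling, which the test above exercises, splits a replica delta across a Deployment's ReplicaSets in proportion to their current sizes, handing rounding leftovers to the larger sets first. A simplified sketch of that allocation (the real controller logic in `kubernetes/pkg/controller/deployment` is more involved, also accounting for maxSurge and annotations):

```python
def split_proportionally(delta: int, sizes: list[int]) -> list[int]:
    """Distribute `delta` extra replicas across ReplicaSets of the given
    sizes, proportionally, leftovers going to the largest sets first."""
    total = sum(sizes)
    if total == 0:
        return [0] * len(sizes)
    shares = [delta * s // total for s in sizes]
    leftover = delta - sum(shares)
    # Hand out the remainder one replica at a time, largest ReplicaSet first.
    for i in sorted(range(len(sizes)), key=lambda i: -sizes[i])[:leftover]:
        shares[i] += 1
    return shares
```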
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:55:21.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-de68815f-078e-4dc6-aef8-c6dae48d3a66 in namespace container-probe-1083
May 11 20:55:43.801: INFO: Started pod liveness-de68815f-078e-4dc6-aef8-c6dae48d3a66 in namespace container-probe-1083
STEP: checking the pod's current state and verifying that restartCount is present
May 11 20:55:43.803: INFO: Initial restart count of pod liveness-de68815f-078e-4dc6-aef8-c6dae48d3a66 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 20:59:45.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1083" for this suite.

• [SLOW TEST:264.219 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":747,"failed":0}
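The `tcpSocket` liveness probe this test configures against port 8080 succeeds whenever a plain TCP connection can be established; the kubelet sends no payload. The success criterion can be mimicked in a few lines (illustrative only, not the kubelet's implementation):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True iff a TCP connection to host:port succeeds, like a
    tcpSocket liveness probe (periodSeconds/failureThreshold elided)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```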
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 20:59:45.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 11 20:59:47.172: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:59:47.419: INFO: Number of nodes with available pods: 0
May 11 20:59:47.419: INFO: Node kali-worker is running more than one daemon pod
May 11 20:59:48.465: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:59:48.686: INFO: Number of nodes with available pods: 0
May 11 20:59:48.686: INFO: Node kali-worker is running more than one daemon pod
May 11 20:59:49.453: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:59:49.455: INFO: Number of nodes with available pods: 0
May 11 20:59:49.455: INFO: Node kali-worker is running more than one daemon pod
May 11 20:59:51.121: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:59:51.126: INFO: Number of nodes with available pods: 0
May 11 20:59:51.126: INFO: Node kali-worker is running more than one daemon pod
May 11 20:59:52.101: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:59:52.477: INFO: Number of nodes with available pods: 0
May 11 20:59:52.477: INFO: Node kali-worker is running more than one daemon pod
May 11 20:59:53.424: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:59:53.428: INFO: Number of nodes with available pods: 1
May 11 20:59:53.428: INFO: Node kali-worker is running more than one daemon pod
May 11 20:59:54.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:59:54.430: INFO: Number of nodes with available pods: 2
May 11 20:59:54.430: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 11 20:59:54.484: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:59:54.500: INFO: Number of nodes with available pods: 2
May 11 20:59:54.500: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4924, will wait for the garbage collector to delete the pods
May 11 20:59:55.616: INFO: Deleting DaemonSet.extensions daemon-set took: 6.460537ms
May 11 20:59:56.216: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.418289ms
May 11 21:00:03.818: INFO: Number of nodes with available pods: 0
May 11 21:00:03.819: INFO: Number of running nodes: 0, number of available pods: 0
May 11 21:00:03.820: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4924/daemonsets","resourceVersion":"3513766"},"items":null}

May 11 21:00:03.822: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4924/pods","resourceVersion":"3513766"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:00:03.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4924" for this suite.

• [SLOW TEST:18.107 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":42,"skipped":767,"failed":0}
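The repeated "can't tolerate node kali-control-plane" lines above come from the test's scheduling check: a DaemonSet pod only counts toward a node whose taints it tolerates, and the default NoExecute tolerations do not cover the master's NoSchedule taint. A simplified model of that matching (the scheduler's full rules also handle empty keys and other operator details):

```python
def tolerates(tolerations: list[dict], taint: dict) -> bool:
    """True if any toleration matches the taint (simplified sketch)."""
    for t in tolerations:
        key_ok = t.get("key") in (None, "", taint["key"])
        effect_ok = t.get("effect") in (None, "", taint["effect"])
        op = t.get("operator", "Equal")
        value_ok = op == "Exists" or t.get("value", "") == taint.get("value", "")
        if key_ok and effect_ok and value_ok:
            return True
    return False

# The default not-ready/unreachable tolerations don't match the master
# taint, so the control-plane node is skipped:
pod_tolerations = [
    {"key": "node.kubernetes.io/not-ready", "operator": "Exists", "effect": "NoExecute"},
    {"key": "node.kubernetes.io/unreachable", "operator": "Exists", "effect": "NoExecute"},
]
master_taint = {"key": "node-role.kubernetes.io/master", "value": "", "effect": "NoSchedule"}
```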
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:00:03.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-09b14295-bca9-4bed-a336-9374e2f25de4
STEP: Creating secret with name secret-projected-all-test-volume-2aea7389-30d1-4596-9d81-fed7073eeebc
STEP: Creating a pod to test Check all projections for projected volume plugin
May 11 21:00:03.984: INFO: Waiting up to 5m0s for pod "projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b" in namespace "projected-2510" to be "Succeeded or Failed"
May 11 21:00:04.000: INFO: Pod "projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.290288ms
May 11 21:00:06.031: INFO: Pod "projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046307096s
May 11 21:00:08.376: INFO: Pod "projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391371708s
May 11 21:00:10.390: INFO: Pod "projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b": Phase="Running", Reason="", readiness=true. Elapsed: 6.405384496s
May 11 21:00:12.393: INFO: Pod "projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.409120329s
STEP: Saw pod success
May 11 21:00:12.394: INFO: Pod "projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b" satisfied condition "Succeeded or Failed"
May 11 21:00:12.396: INFO: Trying to get logs from node kali-worker pod projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b container projected-all-volume-test: 
STEP: delete the pod
May 11 21:00:12.477: INFO: Waiting for pod projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b to disappear
May 11 21:00:12.481: INFO: Pod projected-volume-3c6cd686-c22f-4ba6-aea5-6349ab073d3b no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:00:12.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2510" for this suite.

• [SLOW TEST:8.653 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":43,"skipped":791,"failed":0}
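The projected volume exercised above overlays several sources — the ConfigMap, the Secret, and downward API fields — into a single directory tree, and colliding paths are rejected rather than overwritten. A toy model of that merge (a sketch of the projection semantics, not the volume plugin):

```python
def project(*sources: dict) -> dict:
    """Merge path->content maps from several sources into one volume view,
    rejecting colliding paths."""
    volume: dict = {}
    for src in sources:
        for path, content in src.items():
            if path in volume:
                raise ValueError(f"conflicting path: {path}")
            volume[path] = content
    return volume
```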
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:00:12.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:00:29.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7304" for this suite.

• [SLOW TEST:17.065 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":44,"skipped":802,"failed":0}
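The quota lifecycle checked above is bookkeeping: `used` for the counted resource rises when the ConfigMap is created, falls when it is deleted, and the quota status must converge to those values. A toy model of that accounting (illustrative, not the quota controller):

```python
class ResourceQuota:
    """Toy usage counter mirroring the create/delete accounting the test checks."""

    def __init__(self, hard: dict):
        self.hard = hard
        self.used = {k: 0 for k in hard}

    def create(self, kind: str) -> None:
        # Admission rejects a create that would push usage past the hard limit.
        if self.used[kind] + 1 > self.hard[kind]:
            raise PermissionError(f"exceeded quota for {kind}")
        self.used[kind] += 1

    def delete(self, kind: str) -> None:
        self.used[kind] -= 1
```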
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:00:29.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:00:30.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-048fcade-bf7c-4b69-8b9d-92e55b6731e3" in namespace "projected-2614" to be "Succeeded or Failed"
May 11 21:00:30.378: INFO: Pod "downwardapi-volume-048fcade-bf7c-4b69-8b9d-92e55b6731e3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.28007ms
May 11 21:00:32.588: INFO: Pod "downwardapi-volume-048fcade-bf7c-4b69-8b9d-92e55b6731e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230561296s
May 11 21:00:34.639: INFO: Pod "downwardapi-volume-048fcade-bf7c-4b69-8b9d-92e55b6731e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28150821s
May 11 21:00:36.642: INFO: Pod "downwardapi-volume-048fcade-bf7c-4b69-8b9d-92e55b6731e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.284393635s
STEP: Saw pod success
May 11 21:00:36.642: INFO: Pod "downwardapi-volume-048fcade-bf7c-4b69-8b9d-92e55b6731e3" satisfied condition "Succeeded or Failed"
May 11 21:00:36.644: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-048fcade-bf7c-4b69-8b9d-92e55b6731e3 container client-container: 
STEP: delete the pod
May 11 21:00:37.106: INFO: Waiting for pod downwardapi-volume-048fcade-bf7c-4b69-8b9d-92e55b6731e3 to disappear
May 11 21:00:37.120: INFO: Pod downwardapi-volume-048fcade-bf7c-4b69-8b9d-92e55b6731e3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:00:37.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2614" for this suite.

• [SLOW TEST:7.574 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":807,"failed":0}
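The downward API volume above exposes the container's CPU request as a file; requests are resource quantities such as `250m`, and the volume divides by the configured divisor, rounding up, before writing the value. A sketch of that conversion (milli-CPU only; real quantities also cover binary and decimal suffixes):

```python
def cpu_to_millis(q: str) -> int:
    """Parse a CPU quantity such as '250m' or '2' into millicores."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def downward_api_value(request: str, divisor: str = "1m") -> int:
    # The downward API rounds the request up to a whole multiple of the divisor.
    req, div = cpu_to_millis(request), cpu_to_millis(divisor)
    return -(-req // div)  # ceiling division
```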
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:00:37.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0511 21:00:47.222917       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 21:00:47.222: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:00:47.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4623" for this suite.

• [SLOW TEST:10.097 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":46,"skipped":818,"failed":0}
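Non-orphaning deletion, verified above, means the garbage collector removes every object whose ownerReferences chain leads back to the deleted RC. A minimal model of that transitive sweep (a sketch; the real collector works from a watch-driven dependency graph):

```python
def cascade_delete(objects: dict, name: str) -> None:
    """Delete `name` and, transitively, every object owned by a deleted
    object. `objects` maps object name -> set of owner names."""
    doomed = {name}
    changed = True
    while changed:
        changed = False
        for obj, owners in objects.items():
            if obj not in doomed and owners & doomed:
                doomed.add(obj)
                changed = True
    for obj in doomed:
        objects.pop(obj, None)
```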
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:00:47.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May 11 21:00:47.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7614'
May 11 21:00:58.491: INFO: stderr: ""
May 11 21:00:58.491: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 11 21:00:58.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7614'
May 11 21:00:58.715: INFO: stderr: ""
May 11 21:00:58.715: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
May 11 21:01:03.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7614'
May 11 21:01:04.055: INFO: stderr: ""
May 11 21:01:04.055: INFO: stdout: "update-demo-nautilus-9rwb5 update-demo-nautilus-sz86b "
May 11 21:01:04.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9rwb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:04.470: INFO: stderr: ""
May 11 21:01:04.470: INFO: stdout: ""
May 11 21:01:04.470: INFO: update-demo-nautilus-9rwb5 is created but not running
May 11 21:01:09.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7614'
May 11 21:01:09.573: INFO: stderr: ""
May 11 21:01:09.573: INFO: stdout: "update-demo-nautilus-9rwb5 update-demo-nautilus-sz86b "
May 11 21:01:09.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9rwb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:09.670: INFO: stderr: ""
May 11 21:01:09.670: INFO: stdout: "true"
May 11 21:01:09.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9rwb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:09.765: INFO: stderr: ""
May 11 21:01:09.765: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 21:01:09.765: INFO: validating pod update-demo-nautilus-9rwb5
May 11 21:01:09.769: INFO: got data: {
  "image": "nautilus.jpg"
}

May 11 21:01:09.769: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 21:01:09.769: INFO: update-demo-nautilus-9rwb5 is verified up and running
May 11 21:01:09.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sz86b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:09.850: INFO: stderr: ""
May 11 21:01:09.850: INFO: stdout: "true"
May 11 21:01:09.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sz86b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:10.163: INFO: stderr: ""
May 11 21:01:10.163: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 21:01:10.163: INFO: validating pod update-demo-nautilus-sz86b
May 11 21:01:10.203: INFO: got data: {
  "image": "nautilus.jpg"
}

May 11 21:01:10.203: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 21:01:10.203: INFO: update-demo-nautilus-sz86b is verified up and running
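The long `--template` expressions above all answer one question: does the `update-demo` container in this pod report a running state? The same check, applied to a pod's JSON rather than via a Go template (a hypothetical helper, not test-framework code):

```python
def container_running(pod: dict, name: str = "update-demo") -> bool:
    """Mirror the kubectl template: true iff the named container has a
    'running' entry in status.containerStatuses[].state."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False
```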
STEP: scaling down the replication controller
May 11 21:01:10.949: INFO: scanned /root for discovery docs: 
May 11 21:01:10.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7614'
May 11 21:01:12.837: INFO: stderr: ""
May 11 21:01:12.837: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 11 21:01:12.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7614'
May 11 21:01:13.479: INFO: stderr: ""
May 11 21:01:13.479: INFO: stdout: "update-demo-nautilus-9rwb5 update-demo-nautilus-sz86b "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 11 21:01:18.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7614'
May 11 21:01:18.591: INFO: stderr: ""
May 11 21:01:18.591: INFO: stdout: "update-demo-nautilus-9rwb5 "
May 11 21:01:18.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9rwb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:18.679: INFO: stderr: ""
May 11 21:01:18.679: INFO: stdout: "true"
May 11 21:01:18.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9rwb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:18.945: INFO: stderr: ""
May 11 21:01:18.945: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 21:01:18.945: INFO: validating pod update-demo-nautilus-9rwb5
May 11 21:01:18.949: INFO: got data: {
  "image": "nautilus.jpg"
}

May 11 21:01:18.949: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 21:01:18.949: INFO: update-demo-nautilus-9rwb5 is verified up and running
STEP: scaling up the replication controller
May 11 21:01:18.951: INFO: scanned /root for discovery docs: 
May 11 21:01:18.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7614'
May 11 21:01:20.252: INFO: stderr: ""
May 11 21:01:20.252: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 11 21:01:20.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7614'
May 11 21:01:20.430: INFO: stderr: ""
May 11 21:01:20.430: INFO: stdout: "update-demo-nautilus-9rwb5 update-demo-nautilus-plw4w "
May 11 21:01:20.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9rwb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:20.511: INFO: stderr: ""
May 11 21:01:20.511: INFO: stdout: "true"
May 11 21:01:20.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9rwb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:20.618: INFO: stderr: ""
May 11 21:01:20.618: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 21:01:20.618: INFO: validating pod update-demo-nautilus-9rwb5
May 11 21:01:20.620: INFO: got data: {
  "image": "nautilus.jpg"
}

May 11 21:01:20.620: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 21:01:20.620: INFO: update-demo-nautilus-9rwb5 is verified up and running
May 11 21:01:20.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-plw4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:20.715: INFO: stderr: ""
May 11 21:01:20.715: INFO: stdout: ""
May 11 21:01:20.715: INFO: update-demo-nautilus-plw4w is created but not running
May 11 21:01:25.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7614'
May 11 21:01:25.812: INFO: stderr: ""
May 11 21:01:25.812: INFO: stdout: "update-demo-nautilus-9rwb5 update-demo-nautilus-plw4w "
May 11 21:01:25.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9rwb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:25.905: INFO: stderr: ""
May 11 21:01:25.905: INFO: stdout: "true"
May 11 21:01:25.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9rwb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:26.213: INFO: stderr: ""
May 11 21:01:26.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 21:01:26.213: INFO: validating pod update-demo-nautilus-9rwb5
May 11 21:01:26.216: INFO: got data: {
  "image": "nautilus.jpg"
}

May 11 21:01:26.216: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 21:01:26.216: INFO: update-demo-nautilus-9rwb5 is verified up and running
May 11 21:01:26.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-plw4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:26.295: INFO: stderr: ""
May 11 21:01:26.295: INFO: stdout: "true"
May 11 21:01:26.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-plw4w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7614'
May 11 21:01:26.384: INFO: stderr: ""
May 11 21:01:26.384: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 21:01:26.384: INFO: validating pod update-demo-nautilus-plw4w
May 11 21:01:26.387: INFO: got data: {
  "image": "nautilus.jpg"
}

May 11 21:01:26.387: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 21:01:26.387: INFO: update-demo-nautilus-plw4w is verified up and running
STEP: using delete to clean up resources
May 11 21:01:26.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7614'
May 11 21:01:26.947: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:01:26.947: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 11 21:01:26.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7614'
May 11 21:01:27.062: INFO: stderr: "No resources found in kubectl-7614 namespace.\n"
May 11 21:01:27.062: INFO: stdout: ""
May 11 21:01:27.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7614 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 11 21:01:27.675: INFO: stderr: ""
May 11 21:01:27.675: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:01:27.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7614" for this suite.

• [SLOW TEST:40.859 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":47,"skipped":838,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:01:28.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May 11 21:01:28.716: INFO: Waiting up to 5m0s for pod "pod-f7f89953-de97-4945-8e0c-84f58ca4ed60" in namespace "emptydir-562" to be "Succeeded or Failed"
May 11 21:01:29.591: INFO: Pod "pod-f7f89953-de97-4945-8e0c-84f58ca4ed60": Phase="Pending", Reason="", readiness=false. Elapsed: 875.365571ms
May 11 21:01:31.621: INFO: Pod "pod-f7f89953-de97-4945-8e0c-84f58ca4ed60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904453551s
May 11 21:01:33.897: INFO: Pod "pod-f7f89953-de97-4945-8e0c-84f58ca4ed60": Phase="Pending", Reason="", readiness=false. Elapsed: 5.181118237s
May 11 21:01:35.901: INFO: Pod "pod-f7f89953-de97-4945-8e0c-84f58ca4ed60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.18439996s
STEP: Saw pod success
May 11 21:01:35.901: INFO: Pod "pod-f7f89953-de97-4945-8e0c-84f58ca4ed60" satisfied condition "Succeeded or Failed"
May 11 21:01:35.903: INFO: Trying to get logs from node kali-worker pod pod-f7f89953-de97-4945-8e0c-84f58ca4ed60 container test-container: 
STEP: delete the pod
May 11 21:01:36.113: INFO: Waiting for pod pod-f7f89953-de97-4945-8e0c-84f58ca4ed60 to disappear
May 11 21:01:36.124: INFO: Pod pod-f7f89953-de97-4945-8e0c-84f58ca4ed60 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:01:36.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-562" for this suite.

• [SLOW TEST:8.045 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":848,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:01:36.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 11 21:01:38.344: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 11 21:01:40.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827698, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827698, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827698, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827698, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:01:42.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827698, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827698, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827698, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827698, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:01:45.723: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:01:45.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:01:49.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9147" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:13.621 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":49,"skipped":864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:01:49.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May 11 21:01:57.924: INFO: &Pod{ObjectMeta:{send-events-ee2b1d5a-eff5-4275-9b92-dc12bc5dd689  events-5 /api/v1/namespaces/events-5/pods/send-events-ee2b1d5a-eff5-4275-9b92-dc12bc5dd689 ff159980-df0f-44e3-965b-079a346ee3c6 3514387 0 2020-05-11 21:01:49 +0000 UTC   map[name:foo time:804276551] map[] [] []  [{e2e.test Update v1 2020-05-11 21:01:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 
102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 21:01:56 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 52 52 92 34 125 34 58 
123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sx52q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sx52q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sx52q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:n
il,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:01:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:01:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:01:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:01:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.44,StartTime:2020-05-11 21:01:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 21:01:55 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://7458cd4baf487969734ea5f4198f2ea712656cd0f113e7046b4dc0a53747b032,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
May 11 21:01:59.928: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May 11 21:02:01.932: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:02:01.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5" for this suite.

• [SLOW TEST:12.278 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":50,"skipped":960,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:02:02.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-3e73dc78-fad0-4b22-97fc-9791d4744f44 in namespace container-probe-3923
May 11 21:02:10.651: INFO: Started pod busybox-3e73dc78-fad0-4b22-97fc-9791d4744f44 in namespace container-probe-3923
STEP: checking the pod's current state and verifying that restartCount is present
May 11 21:02:10.652: INFO: Initial restart count of pod busybox-3e73dc78-fad0-4b22-97fc-9791d4744f44 is 0
May 11 21:02:59.245: INFO: Restart count of pod container-probe-3923/busybox-3e73dc78-fad0-4b22-97fc-9791d4744f44 is now 1 (48.592147073s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:02:59.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3923" for this suite.

• [SLOW TEST:58.119 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":973,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:03:00.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:03:04.454: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:03:06.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827784, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827784, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827785, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827784, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:03:08.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827784, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827784, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827785, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827784, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:03:10.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827784, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827784, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827785, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827784, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:03:13.920: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:03:13.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1191-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:03:16.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5284" for this suite.
STEP: Destroying namespace "webhook-5284-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.964 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":52,"skipped":975,"failed":0}
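The DeploymentStatus dump at the top of this test shows the check the framework is polling for before registering the webhook: the "Available" condition flipping to "True". A minimal sketch of that condition check, using stand-in structs (the real types live in k8s.io/api/apps/v1, and this is not the framework's actual helper):

```go
package main

import "fmt"

// DeploymentCondition is a minimal stand-in for the appsv1 type whose
// fields (Type, Status, Reason) appear in the status dump above.
type DeploymentCondition struct {
	Type   string
	Status string
	Reason string
}

// available reports whether the "Available" condition is "True" — the
// state the wait loop above is polling toward. Illustrative only.
func available(conds []DeploymentCondition) bool {
	for _, c := range conds {
		if c.Type == "Available" && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	// The conditions as logged above: not yet available, still progressing.
	conds := []DeploymentCondition{
		{Type: "Available", Status: "False", Reason: "MinimumReplicasUnavailable"},
		{Type: "Progressing", Status: "True", Reason: "ReplicaSetUpdated"},
	}
	fmt.Println(available(conds)) // false
}
```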
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:03:17.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 11 21:03:17.249: INFO: Waiting up to 5m0s for pod "pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb" in namespace "emptydir-9915" to be "Succeeded or Failed"
May 11 21:03:17.442: INFO: Pod "pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 192.682197ms
May 11 21:03:19.496: INFO: Pod "pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247172428s
May 11 21:03:21.599: INFO: Pod "pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350289153s
May 11 21:03:23.719: INFO: Pod "pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470644407s
May 11 21:03:25.965: INFO: Pod "pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.716665662s
STEP: Saw pod success
May 11 21:03:25.966: INFO: Pod "pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb" satisfied condition "Succeeded or Failed"
May 11 21:03:25.968: INFO: Trying to get logs from node kali-worker2 pod pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb container test-container: 
STEP: delete the pod
May 11 21:03:26.738: INFO: Waiting for pod pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb to disappear
May 11 21:03:26.833: INFO: Pod pod-69d17009-e66f-4e3a-b7e0-2931d4e15fcb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:03:26.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9915" for this suite.

• [SLOW TEST:9.952 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":976,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:03:27.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May 11 21:03:28.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5480'
May 11 21:03:29.781: INFO: stderr: ""
May 11 21:03:29.781: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 11 21:03:29.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5480'
May 11 21:03:31.304: INFO: stderr: ""
May 11 21:03:31.304: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
May 11 21:03:36.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5480'
May 11 21:03:36.629: INFO: stderr: ""
May 11 21:03:36.629: INFO: stdout: "update-demo-nautilus-675v2 update-demo-nautilus-pqrlv "
May 11 21:03:36.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-675v2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5480'
May 11 21:03:36.867: INFO: stderr: ""
May 11 21:03:36.867: INFO: stdout: ""
May 11 21:03:36.867: INFO: update-demo-nautilus-675v2 is created but not running
May 11 21:03:41.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5480'
May 11 21:03:41.972: INFO: stderr: ""
May 11 21:03:41.972: INFO: stdout: "update-demo-nautilus-675v2 update-demo-nautilus-pqrlv "
May 11 21:03:41.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-675v2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5480'
May 11 21:03:42.068: INFO: stderr: ""
May 11 21:03:42.068: INFO: stdout: "true"
May 11 21:03:42.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-675v2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5480'
May 11 21:03:42.155: INFO: stderr: ""
May 11 21:03:42.155: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 21:03:42.155: INFO: validating pod update-demo-nautilus-675v2
May 11 21:03:42.159: INFO: got data: {
  "image": "nautilus.jpg"
}

May 11 21:03:42.159: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 21:03:42.159: INFO: update-demo-nautilus-675v2 is verified up and running
May 11 21:03:42.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqrlv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5480'
May 11 21:03:42.310: INFO: stderr: ""
May 11 21:03:42.310: INFO: stdout: "true"
May 11 21:03:42.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqrlv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5480'
May 11 21:03:42.397: INFO: stderr: ""
May 11 21:03:42.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 21:03:42.397: INFO: validating pod update-demo-nautilus-pqrlv
May 11 21:03:42.401: INFO: got data: {
  "image": "nautilus.jpg"
}

May 11 21:03:42.401: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 21:03:42.401: INFO: update-demo-nautilus-pqrlv is verified up and running
STEP: using delete to clean up resources
May 11 21:03:42.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5480'
May 11 21:03:42.964: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:03:42.964: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 11 21:03:42.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5480'
May 11 21:03:43.637: INFO: stderr: "No resources found in kubectl-5480 namespace.\n"
May 11 21:03:43.637: INFO: stdout: ""
May 11 21:03:43.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5480 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 11 21:03:43.870: INFO: stderr: ""
May 11 21:03:43.870: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:03:43.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5480" for this suite.

• [SLOW TEST:16.808 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":54,"skipped":994,"failed":0}
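The kubectl invocations above pass Go templates such as `{{range .items}}{{.metadata.name}} {{end}}` over the JSON form of a pod list. That part of the template can be exercised locally with the standard text/template package against decoded JSON; note that kubectl's template engine additionally registers helpers like `exists`, which plain text/template does not have. The pod names in the sample data are made up for illustration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// podNames renders the same template kubectl is given above over a JSON
// pod-list document, returning the space-separated names.
func podNames(rawJSON string) (string, error) {
	var data map[string]interface{}
	if err := json.Unmarshal([]byte(rawJSON), &data); err != nil {
		return "", err
	}
	tmpl := template.Must(template.New("names").Parse(
		`{{range .items}}{{.metadata.name}} {{end}}`))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, data); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// Hypothetical pod list in the shape `kubectl get pods -o json` returns.
	raw := `{"items":[
		{"metadata":{"name":"update-demo-nautilus-aaaaa"}},
		{"metadata":{"name":"update-demo-nautilus-bbbbb"}}]}`
	names, err := podNames(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(names)
}
```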
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:03:43.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 11 21:03:44.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-a 10c2ebfc-c08b-453f-a542-dc1bd75356dc 3514860 0 2020-05-11 21:03:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-11 21:03:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 21:03:44.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-a 10c2ebfc-c08b-453f-a542-dc1bd75356dc 3514860 0 2020-05-11 21:03:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-11 21:03:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 11 21:03:54.344: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-a 10c2ebfc-c08b-453f-a542-dc1bd75356dc 3514911 0 2020-05-11 21:03:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-11 21:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 21:03:54.345: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-a 10c2ebfc-c08b-453f-a542-dc1bd75356dc 3514911 0 2020-05-11 21:03:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-11 21:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 11 21:04:04.352: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-a 10c2ebfc-c08b-453f-a542-dc1bd75356dc 3514939 0 2020-05-11 21:03:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-11 21:04:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 21:04:04.352: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-a 10c2ebfc-c08b-453f-a542-dc1bd75356dc 3514939 0 2020-05-11 21:03:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-11 21:04:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 11 21:04:14.356: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-a 10c2ebfc-c08b-453f-a542-dc1bd75356dc 3514969 0 2020-05-11 21:03:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-11 21:04:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 21:04:14.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-a 10c2ebfc-c08b-453f-a542-dc1bd75356dc 3514969 0 2020-05-11 21:03:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-11 21:04:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 11 21:04:24.381: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-b 2d811a06-4105-471d-b225-45c4d0c7dd45 3514999 0 2020-05-11 21:04:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-11 21:04:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 21:04:24.381: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-b 2d811a06-4105-471d-b225-45c4d0c7dd45 3514999 0 2020-05-11 21:04:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-11 21:04:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 11 21:04:34.562: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-b 2d811a06-4105-471d-b225-45c4d0c7dd45 3515027 0 2020-05-11 21:04:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-11 21:04:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 21:04:34.562: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-configmap-b 2d811a06-4105-471d-b225-45c4d0c7dd45 3515027 0 2020-05-11 21:04:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-11 21:04:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:04:44.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-409" for this suite.

• [SLOW TEST:60.698 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":55,"skipped":998,"failed":0}
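The `FieldsV1{Raw:*[123 34 102 ...]}` runs in the watch events above are not garbage: they are the UTF-8 bytes of the managedFields JSON document, printed as a numeric byte slice by Go's default formatting. Converting them back to a string makes the field-ownership map readable. The slice below is copied from the first ADDED event:

```go
package main

import "fmt"

func main() {
	// The FieldsV1 Raw payload from the ADDED event for
	// e2e-watch-test-configmap-a, as logged above.
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58,
		123, 34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34, 46, 34,
		58, 123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104, 45, 116, 104, 105,
		115, 45, 99, 111, 110, 102, 105, 103, 109, 97, 112, 34, 58, 123, 125,
		125, 125, 125}
	// Decoding is just a string conversion: the bytes are plain JSON.
	fmt.Println(string(raw))
	// {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}
}
```

The decoded document says the `e2e.test` field manager owns the `watch-this-configmap` label — exactly the label the watchers are selecting on.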
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:04:44.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
May 11 21:04:45.536: INFO: Waiting up to 5m0s for pod "client-containers-e66b84e0-98e9-4ae4-a9e6-dc7b598a1cff" in namespace "containers-667" to be "Succeeded or Failed"
May 11 21:04:45.771: INFO: Pod "client-containers-e66b84e0-98e9-4ae4-a9e6-dc7b598a1cff": Phase="Pending", Reason="", readiness=false. Elapsed: 235.074751ms
May 11 21:04:48.090: INFO: Pod "client-containers-e66b84e0-98e9-4ae4-a9e6-dc7b598a1cff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.55362996s
May 11 21:04:50.101: INFO: Pod "client-containers-e66b84e0-98e9-4ae4-a9e6-dc7b598a1cff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.564491751s
May 11 21:04:52.167: INFO: Pod "client-containers-e66b84e0-98e9-4ae4-a9e6-dc7b598a1cff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.630770228s
STEP: Saw pod success
May 11 21:04:52.167: INFO: Pod "client-containers-e66b84e0-98e9-4ae4-a9e6-dc7b598a1cff" satisfied condition "Succeeded or Failed"
May 11 21:04:52.170: INFO: Trying to get logs from node kali-worker2 pod client-containers-e66b84e0-98e9-4ae4-a9e6-dc7b598a1cff container test-container: 
STEP: delete the pod
May 11 21:04:52.646: INFO: Waiting for pod client-containers-e66b84e0-98e9-4ae4-a9e6-dc7b598a1cff to disappear
May 11 21:04:52.687: INFO: Pod client-containers-e66b84e0-98e9-4ae4-a9e6-dc7b598a1cff no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:04:52.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-667" for this suite.

• [SLOW TEST:8.120 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":1004,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:04:52.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:05:33.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9" for this suite.

• [SLOW TEST:41.076 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":1023,"failed":0}
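The three containers above appear to be named for their restart policies (rpa/rpof/rpn plausibly reading as RestartPolicy Always/OnFailure/Never — an inference from the names, not stated in the log). The expected-status relationships the blackbox test verifies can be sketched as a table of restart policy and exit code; this is an illustrative summary of standard kubelet semantics, not the test's actual code:

```go
package main

import "fmt"

// expectedPhase gives the eventual pod phase for a single container that
// exits with the given code under each restart policy. Always restarts
// unconditionally (the pod stays Running); OnFailure restarts only on a
// nonzero exit; Never lets the pod settle into Succeeded or Failed.
func expectedPhase(policy string, exitCode int) string {
	switch policy {
	case "Always":
		return "Running" // kubelet keeps restarting the container
	case "OnFailure":
		if exitCode != 0 {
			return "Running" // restarted until it exits 0
		}
		return "Succeeded"
	default: // "Never"
		if exitCode != 0 {
			return "Failed"
		}
		return "Succeeded"
	}
}

func main() {
	for _, c := range []struct {
		policy string
		exit   int
	}{{"Always", 0}, {"OnFailure", 1}, {"OnFailure", 0}, {"Never", 1}, {"Never", 0}} {
		fmt.Printf("%s/exit %d -> %s\n", c.policy, c.exit, expectedPhase(c.policy, c.exit))
	}
}
```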
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:05:33.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-121ee09d-568d-406f-b862-75ec7bef8ade
STEP: Creating a pod to test consume secrets
May 11 21:05:34.623: INFO: Waiting up to 5m0s for pod "pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4" in namespace "secrets-2932" to be "Succeeded or Failed"
May 11 21:05:34.675: INFO: Pod "pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4": Phase="Pending", Reason="", readiness=false. Elapsed: 52.674114ms
May 11 21:05:36.849: INFO: Pod "pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226621227s
May 11 21:05:38.910: INFO: Pod "pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287123862s
May 11 21:05:40.970: INFO: Pod "pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4": Phase="Running", Reason="", readiness=true. Elapsed: 6.347230146s
May 11 21:05:42.973: INFO: Pod "pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.350396219s
STEP: Saw pod success
May 11 21:05:42.973: INFO: Pod "pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4" satisfied condition "Succeeded or Failed"
May 11 21:05:42.975: INFO: Trying to get logs from node kali-worker pod pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4 container secret-volume-test: 
STEP: delete the pod
May 11 21:05:43.169: INFO: Waiting for pod pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4 to disappear
May 11 21:05:43.185: INFO: Pod pod-secrets-54d0f4d2-a36f-426b-9bd5-7ee7a64aacc4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:05:43.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2932" for this suite.

• [SLOW TEST:9.637 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":1023,"failed":0}
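The `defaultMode` this test exercises is stored in the API as a plain int32, so JSON manifests show it in decimal: 420 is the familiar octal permission 0644, and 256 is 0400. The log does not print the mode value used here; the pairs below are common examples for illustration:

```go
package main

import (
	"fmt"
	"strconv"
)

// octal renders a decimal defaultMode value the way it is usually written
// in file-permission form, e.g. 420 -> "0644".
func octal(mode int64) string {
	return "0" + strconv.FormatInt(mode, 8)
}

func main() {
	// Common defaultMode values and their octal renderings.
	for _, mode := range []int64{420, 256, 511} {
		fmt.Printf("%d -> %s\n", mode, octal(mode))
	}
	// 420 -> 0644
	// 256 -> 0400
	// 511 -> 0777
}
```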
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:05:43.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 11 21:05:58.194: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 21:05:58.239: INFO: Pod pod-with-poststart-http-hook still exists
May 11 21:06:00.240: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 21:06:00.243: INFO: Pod pod-with-poststart-http-hook still exists
May 11 21:06:02.240: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 21:06:02.243: INFO: Pod pod-with-poststart-http-hook still exists
May 11 21:06:04.240: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 21:06:04.264: INFO: Pod pod-with-poststart-http-hook still exists
May 11 21:06:06.240: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 21:06:06.244: INFO: Pod pod-with-poststart-http-hook still exists
May 11 21:06:08.240: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 21:06:08.243: INFO: Pod pod-with-poststart-http-hook still exists
May 11 21:06:10.240: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 21:06:10.245: INFO: Pod pod-with-poststart-http-hook still exists
May 11 21:06:12.240: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 21:06:12.359: INFO: Pod pod-with-poststart-http-hook still exists
May 11 21:06:14.240: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 21:06:14.242: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:06:14.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1285" for this suite.

• [SLOW TEST:30.843 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":1054,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:06:14.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
May 11 21:06:14.361: INFO: Waiting up to 5m0s for pod "client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6" in namespace "containers-1623" to be "Succeeded or Failed"
May 11 21:06:14.383: INFO: Pod "client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.246169ms
May 11 21:06:16.627: INFO: Pod "client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266029764s
May 11 21:06:18.630: INFO: Pod "client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269217925s
May 11 21:06:20.700: INFO: Pod "client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6": Phase="Running", Reason="", readiness=true. Elapsed: 6.33891708s
May 11 21:06:22.713: INFO: Pod "client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.351604596s
STEP: Saw pod success
May 11 21:06:22.713: INFO: Pod "client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6" satisfied condition "Succeeded or Failed"
May 11 21:06:23.109: INFO: Trying to get logs from node kali-worker pod client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6 container test-container: 
STEP: delete the pod
May 11 21:06:23.955: INFO: Waiting for pod client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6 to disappear
May 11 21:06:23.976: INFO: Pod client-containers-2ded08cb-6f10-4eff-aa1b-d352eedb88f6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:06:23.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1623" for this suite.

• [SLOW TEST:9.780 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":1072,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:06:24.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-bd2027d2-84bd-4d78-a4a9-299fab23971c
STEP: Creating a pod to test consume configMaps
May 11 21:06:25.148: INFO: Waiting up to 5m0s for pod "pod-configmaps-7adfc4f9-9310-45c6-88d0-9a7632cb8de3" in namespace "configmap-1916" to be "Succeeded or Failed"
May 11 21:06:25.491: INFO: Pod "pod-configmaps-7adfc4f9-9310-45c6-88d0-9a7632cb8de3": Phase="Pending", Reason="", readiness=false. Elapsed: 342.795921ms
May 11 21:06:27.495: INFO: Pod "pod-configmaps-7adfc4f9-9310-45c6-88d0-9a7632cb8de3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346611388s
May 11 21:06:29.498: INFO: Pod "pod-configmaps-7adfc4f9-9310-45c6-88d0-9a7632cb8de3": Phase="Running", Reason="", readiness=true. Elapsed: 4.35001106s
May 11 21:06:31.545: INFO: Pod "pod-configmaps-7adfc4f9-9310-45c6-88d0-9a7632cb8de3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.396762569s
STEP: Saw pod success
May 11 21:06:31.545: INFO: Pod "pod-configmaps-7adfc4f9-9310-45c6-88d0-9a7632cb8de3" satisfied condition "Succeeded or Failed"
May 11 21:06:31.724: INFO: Trying to get logs from node kali-worker pod pod-configmaps-7adfc4f9-9310-45c6-88d0-9a7632cb8de3 container configmap-volume-test: 
STEP: delete the pod
May 11 21:06:31.783: INFO: Waiting for pod pod-configmaps-7adfc4f9-9310-45c6-88d0-9a7632cb8de3 to disappear
May 11 21:06:31.808: INFO: Pod pod-configmaps-7adfc4f9-9310-45c6-88d0-9a7632cb8de3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:06:31.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1916" for this suite.

• [SLOW TEST:7.784 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1106,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:06:31.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 11 21:06:32.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3665'
May 11 21:06:32.633: INFO: stderr: ""
May 11 21:06:32.633: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
May 11 21:06:32.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3665'
May 11 21:06:43.395: INFO: stderr: ""
May 11 21:06:43.395: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:06:43.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3665" for this suite.

• [SLOW TEST:11.599 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":62,"skipped":1137,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:06:43.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-2608
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2608 to expose endpoints map[]
May 11 21:06:43.596: INFO: Get endpoints failed (2.66975ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 11 21:06:44.600: INFO: successfully validated that service endpoint-test2 in namespace services-2608 exposes endpoints map[] (1.006687541s elapsed)
STEP: Creating pod pod1 in namespace services-2608
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2608 to expose endpoints map[pod1:[80]]
May 11 21:06:48.728: INFO: successfully validated that service endpoint-test2 in namespace services-2608 exposes endpoints map[pod1:[80]] (4.120695507s elapsed)
STEP: Creating pod pod2 in namespace services-2608
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2608 to expose endpoints map[pod1:[80] pod2:[80]]
May 11 21:06:54.165: INFO: Unexpected endpoints: found map[e2f7274d-fabe-42c0-bc7c-649829c598ff:[80]], expected map[pod1:[80] pod2:[80]] (5.433638106s elapsed, will retry)
May 11 21:06:56.229: INFO: successfully validated that service endpoint-test2 in namespace services-2608 exposes endpoints map[pod1:[80] pod2:[80]] (7.497783835s elapsed)
STEP: Deleting pod pod1 in namespace services-2608
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2608 to expose endpoints map[pod2:[80]]
May 11 21:06:56.441: INFO: successfully validated that service endpoint-test2 in namespace services-2608 exposes endpoints map[pod2:[80]] (192.164565ms elapsed)
STEP: Deleting pod pod2 in namespace services-2608
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2608 to expose endpoints map[]
May 11 21:06:57.653: INFO: successfully validated that service endpoint-test2 in namespace services-2608 exposes endpoints map[] (1.206374938s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:06:58.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2608" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:15.131 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":63,"skipped":1144,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:06:58.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:06:58.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05e731d4-dd92-4dcc-af20-806b43057c80" in namespace "downward-api-6906" to be "Succeeded or Failed"
May 11 21:06:59.087: INFO: Pod "downwardapi-volume-05e731d4-dd92-4dcc-af20-806b43057c80": Phase="Pending", Reason="", readiness=false. Elapsed: 237.752341ms
May 11 21:07:01.259: INFO: Pod "downwardapi-volume-05e731d4-dd92-4dcc-af20-806b43057c80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409684599s
May 11 21:07:03.510: INFO: Pod "downwardapi-volume-05e731d4-dd92-4dcc-af20-806b43057c80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.660915381s
May 11 21:07:06.144: INFO: Pod "downwardapi-volume-05e731d4-dd92-4dcc-af20-806b43057c80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.294775969s
STEP: Saw pod success
May 11 21:07:06.144: INFO: Pod "downwardapi-volume-05e731d4-dd92-4dcc-af20-806b43057c80" satisfied condition "Succeeded or Failed"
May 11 21:07:06.146: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-05e731d4-dd92-4dcc-af20-806b43057c80 container client-container: 
STEP: delete the pod
May 11 21:07:06.656: INFO: Waiting for pod downwardapi-volume-05e731d4-dd92-4dcc-af20-806b43057c80 to disappear
May 11 21:07:06.986: INFO: Pod downwardapi-volume-05e731d4-dd92-4dcc-af20-806b43057c80 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:07:06.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6906" for this suite.

• [SLOW TEST:8.448 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1145,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:07:06.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
May 11 21:07:07.200: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:07:07.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6672" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":65,"skipped":1148,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:07:07.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 21:07:15.409: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:07:15.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8816" for this suite.

• [SLOW TEST:7.803 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1157,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:07:15.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:07:15.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341" in namespace "projected-6981" to be "Succeeded or Failed"
May 11 21:07:15.756: INFO: Pod "downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341": Phase="Pending", Reason="", readiness=false. Elapsed: 19.348472ms
May 11 21:07:17.809: INFO: Pod "downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071875861s
May 11 21:07:19.812: INFO: Pod "downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075532584s
May 11 21:07:21.880: INFO: Pod "downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143390974s
May 11 21:07:23.906: INFO: Pod "downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341": Phase="Running", Reason="", readiness=true. Elapsed: 8.169317509s
May 11 21:07:25.909: INFO: Pod "downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.171874469s
STEP: Saw pod success
May 11 21:07:25.909: INFO: Pod "downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341" satisfied condition "Succeeded or Failed"
May 11 21:07:25.911: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341 container client-container: 
STEP: delete the pod
May 11 21:07:25.939: INFO: Waiting for pod downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341 to disappear
May 11 21:07:25.960: INFO: Pod downwardapi-volume-61365095-f1f1-4a47-ac55-409144c68341 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:07:25.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6981" for this suite.

• [SLOW TEST:10.552 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1158,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:07:26.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 21:07:35.553: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:07:36.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7450" for this suite.

• [SLOW TEST:10.113 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1219,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:07:36.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-8b325fd4-3824-4d53-857f-19d61c5b2384
STEP: Creating a pod to test consume configMaps
May 11 21:07:37.767: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ee85ebb-750a-4d0b-93e8-f55293cc8be2" in namespace "configmap-7321" to be "Succeeded or Failed"
May 11 21:07:37.816: INFO: Pod "pod-configmaps-5ee85ebb-750a-4d0b-93e8-f55293cc8be2": Phase="Pending", Reason="", readiness=false. Elapsed: 48.784235ms
May 11 21:07:39.845: INFO: Pod "pod-configmaps-5ee85ebb-750a-4d0b-93e8-f55293cc8be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077820964s
May 11 21:07:42.019: INFO: Pod "pod-configmaps-5ee85ebb-750a-4d0b-93e8-f55293cc8be2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251988429s
May 11 21:07:44.030: INFO: Pod "pod-configmaps-5ee85ebb-750a-4d0b-93e8-f55293cc8be2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.263542892s
STEP: Saw pod success
May 11 21:07:44.031: INFO: Pod "pod-configmaps-5ee85ebb-750a-4d0b-93e8-f55293cc8be2" satisfied condition "Succeeded or Failed"
May 11 21:07:44.032: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-5ee85ebb-750a-4d0b-93e8-f55293cc8be2 container configmap-volume-test: 
STEP: delete the pod
May 11 21:07:44.090: INFO: Waiting for pod pod-configmaps-5ee85ebb-750a-4d0b-93e8-f55293cc8be2 to disappear
May 11 21:07:44.288: INFO: Pod pod-configmaps-5ee85ebb-750a-4d0b-93e8-f55293cc8be2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:07:44.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7321" for this suite.

• [SLOW TEST:8.219 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1231,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:07:44.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-5442
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 11 21:07:44.600: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 11 21:07:44.743: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:07:46.747: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:07:48.870: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:07:50.767: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:07:52.748: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:07:54.747: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:07:56.748: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:07:58.748: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:08:00.747: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:08:02.747: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:08:04.747: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 11 21:08:04.753: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 11 21:08:06.756: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 11 21:08:08.756: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 11 21:08:10.757: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 11 21:08:16.811: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.97:8080/dial?request=hostname&protocol=http&host=10.244.2.56&port=8080&tries=1'] Namespace:pod-network-test-5442 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:08:16.811: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:08:16.843249       7 log.go:172] (0xc0025422c0) (0xc002c026e0) Create stream
I0511 21:08:16.843286       7 log.go:172] (0xc0025422c0) (0xc002c026e0) Stream added, broadcasting: 1
I0511 21:08:16.844714       7 log.go:172] (0xc0025422c0) Reply frame received for 1
I0511 21:08:16.844736       7 log.go:172] (0xc0025422c0) (0xc002c02820) Create stream
I0511 21:08:16.844744       7 log.go:172] (0xc0025422c0) (0xc002c02820) Stream added, broadcasting: 3
I0511 21:08:16.845586       7 log.go:172] (0xc0025422c0) Reply frame received for 3
I0511 21:08:16.845622       7 log.go:172] (0xc0025422c0) (0xc001a0e460) Create stream
I0511 21:08:16.845636       7 log.go:172] (0xc0025422c0) (0xc001a0e460) Stream added, broadcasting: 5
I0511 21:08:16.846307       7 log.go:172] (0xc0025422c0) Reply frame received for 5
I0511 21:08:16.936831       7 log.go:172] (0xc0025422c0) Data frame received for 3
I0511 21:08:16.936855       7 log.go:172] (0xc002c02820) (3) Data frame handling
I0511 21:08:16.936868       7 log.go:172] (0xc002c02820) (3) Data frame sent
I0511 21:08:16.937217       7 log.go:172] (0xc0025422c0) Data frame received for 3
I0511 21:08:16.937278       7 log.go:172] (0xc002c02820) (3) Data frame handling
I0511 21:08:16.937323       7 log.go:172] (0xc0025422c0) Data frame received for 5
I0511 21:08:16.937364       7 log.go:172] (0xc001a0e460) (5) Data frame handling
I0511 21:08:16.938637       7 log.go:172] (0xc0025422c0) Data frame received for 1
I0511 21:08:16.938648       7 log.go:172] (0xc002c026e0) (1) Data frame handling
I0511 21:08:16.938655       7 log.go:172] (0xc002c026e0) (1) Data frame sent
I0511 21:08:16.938664       7 log.go:172] (0xc0025422c0) (0xc002c026e0) Stream removed, broadcasting: 1
I0511 21:08:16.938674       7 log.go:172] (0xc0025422c0) Go away received
I0511 21:08:16.938793       7 log.go:172] (0xc0025422c0) (0xc002c026e0) Stream removed, broadcasting: 1
I0511 21:08:16.938803       7 log.go:172] (0xc0025422c0) (0xc002c02820) Stream removed, broadcasting: 3
I0511 21:08:16.938808       7 log.go:172] (0xc0025422c0) (0xc001a0e460) Stream removed, broadcasting: 5
May 11 21:08:16.938: INFO: Waiting for responses: map[]
May 11 21:08:16.940: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.97:8080/dial?request=hostname&protocol=http&host=10.244.1.96&port=8080&tries=1'] Namespace:pod-network-test-5442 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:08:16.940: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:08:16.958815       7 log.go:172] (0xc0025429a0) (0xc002c02e60) Create stream
I0511 21:08:16.958836       7 log.go:172] (0xc0025429a0) (0xc002c02e60) Stream added, broadcasting: 1
I0511 21:08:16.960437       7 log.go:172] (0xc0025429a0) Reply frame received for 1
I0511 21:08:16.960463       7 log.go:172] (0xc0025429a0) (0xc001a0e5a0) Create stream
I0511 21:08:16.960471       7 log.go:172] (0xc0025429a0) (0xc001a0e5a0) Stream added, broadcasting: 3
I0511 21:08:16.961086       7 log.go:172] (0xc0025429a0) Reply frame received for 3
I0511 21:08:16.961129       7 log.go:172] (0xc0025429a0) (0xc002aa6140) Create stream
I0511 21:08:16.961142       7 log.go:172] (0xc0025429a0) (0xc002aa6140) Stream added, broadcasting: 5
I0511 21:08:16.961691       7 log.go:172] (0xc0025429a0) Reply frame received for 5
I0511 21:08:17.027833       7 log.go:172] (0xc0025429a0) Data frame received for 3
I0511 21:08:17.027875       7 log.go:172] (0xc001a0e5a0) (3) Data frame handling
I0511 21:08:17.027926       7 log.go:172] (0xc001a0e5a0) (3) Data frame sent
I0511 21:08:17.028213       7 log.go:172] (0xc0025429a0) Data frame received for 3
I0511 21:08:17.028227       7 log.go:172] (0xc001a0e5a0) (3) Data frame handling
I0511 21:08:17.028248       7 log.go:172] (0xc0025429a0) Data frame received for 5
I0511 21:08:17.028266       7 log.go:172] (0xc002aa6140) (5) Data frame handling
I0511 21:08:17.029602       7 log.go:172] (0xc0025429a0) Data frame received for 1
I0511 21:08:17.029626       7 log.go:172] (0xc002c02e60) (1) Data frame handling
I0511 21:08:17.029637       7 log.go:172] (0xc002c02e60) (1) Data frame sent
I0511 21:08:17.029649       7 log.go:172] (0xc0025429a0) (0xc002c02e60) Stream removed, broadcasting: 1
I0511 21:08:17.029662       7 log.go:172] (0xc0025429a0) Go away received
I0511 21:08:17.029894       7 log.go:172] (0xc0025429a0) (0xc002c02e60) Stream removed, broadcasting: 1
I0511 21:08:17.029936       7 log.go:172] (0xc0025429a0) (0xc001a0e5a0) Stream removed, broadcasting: 3
I0511 21:08:17.029961       7 log.go:172] (0xc0025429a0) (0xc002aa6140) Stream removed, broadcasting: 5
May 11 21:08:17.030: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:08:17.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5442" for this suite.

• [SLOW TEST:32.572 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1237,"failed":0}
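Each `ExecWithOptions` in the spec above curls the `/dial` endpoint on the test container pod, which proxies a `hostname` request to the target pod and reports which peers answered. A sketch of how such a probe URL is assembled (`dialURL` is my illustrative helper, not a framework function; note that `url.Values.Encode` sorts parameters alphabetically, so the order differs from the logged command):

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL builds a probe URL like the one in the log:
// http://10.244.1.97:8080/dial?request=hostname&protocol=http&host=10.244.2.56&port=8080&tries=1
// The prober pod forwards "tries" hostname requests to host:port over
// the given protocol.
func dialURL(proberIP string, proberPort int, targetIP string, targetPort, tries int) string {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", targetIP)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", fmt.Sprint(tries))
	return fmt.Sprintf("http://%s:%d/dial?%s", proberIP, proberPort, q.Encode())
}

func main() {
	fmt.Println(dialURL("10.244.1.97", 8080, "10.244.2.56", 8080, 1))
	// prints: http://10.244.1.97:8080/dial?host=10.244.2.56&port=8080&protocol=http&request=hostname&tries=1
}
```

`Waiting for responses: map[]` afterward means every expected hostname was seen and the set of still-missing responses is empty.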
SSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:08:17.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4203 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4203;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4203 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4203;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4203.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4203.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4203.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4203.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4203.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4203.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4203.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4203.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4203.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4203.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4203.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.117.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.117.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.117.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.117.8_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4203 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4203;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4203 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4203;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4203.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4203.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4203.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4203.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4203.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4203.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4203.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4203.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4203.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4203.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4203.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4203.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.117.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.117.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.117.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.117.8_tcp@PTR;sleep 1; done
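The probe scripts above derive two name shapes from raw IPv4 addresses: a dashed pod A record (`10-107-117-8.<namespace>.pod.cluster.local`-style, built by the `awk` one-liner from `hostname -i`) and a reversed `in-addr.arpa.` PTR name (e.g. `10.107.117.8` becomes `8.117.107.10.in-addr.arpa.`). A Go sketch of both transformations (the helper names are mine, not from the test):

```go
package main

import (
	"fmt"
	"strings"
)

// ptrName reverses a dotted IPv4 address into the reverse-lookup name
// the script digs for, e.g. 10.107.117.8 -> 8.117.107.10.in-addr.arpa.
func ptrName(ip string) string {
	parts := strings.Split(ip, ".")
	for i, j := 0, len(parts)-1; i < j; i, j = i+1, j-1 {
		parts[i], parts[j] = parts[j], parts[i]
	}
	return strings.Join(parts, ".") + ".in-addr.arpa."
}

// podARecord mirrors the awk one-liner: dots become dashes, then the
// namespace-scoped pod domain is appended.
func podARecord(ip, namespace string) string {
	return strings.ReplaceAll(ip, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	// The service ClusterIP checked by the PTR probes in the log:
	fmt.Println(ptrName("10.107.117.8")) // prints: 8.117.107.10.in-addr.arpa.
	// A hypothetical pod IP, for illustration only:
	fmt.Println(podARecord("10.244.1.97", "dns-4203")) // prints: 10-244-1-97.dns-4203.pod.cluster.local
}
```

Each successful lookup writes an `OK` marker file under `/results`, which the prober then reads back; the `Unable to read ... from pod` lines below are the test retrying before those markers exist.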

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 21:08:36.074: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.077: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.079: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.082: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.083: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.088: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.090: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.115: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.118: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.121: INFO: Unable to read jessie_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.123: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.126: INFO: Unable to read jessie_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.128: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.131: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.134: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:36.152: INFO: Lookups using dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4203 wheezy_tcp@dns-test-service.dns-4203 wheezy_udp@dns-test-service.dns-4203.svc wheezy_tcp@dns-test-service.dns-4203.svc wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4203 jessie_tcp@dns-test-service.dns-4203 jessie_udp@dns-test-service.dns-4203.svc jessie_tcp@dns-test-service.dns-4203.svc jessie_udp@_http._tcp.dns-test-service.dns-4203.svc jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc]

May 11 21:08:41.225: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.229: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.234: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.237: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.241: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.244: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.246: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.248: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.276: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.278: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.280: INFO: Unable to read jessie_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.282: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.284: INFO: Unable to read jessie_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.286: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.288: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.290: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:41.356: INFO: Lookups using dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4203 wheezy_tcp@dns-test-service.dns-4203 wheezy_udp@dns-test-service.dns-4203.svc wheezy_tcp@dns-test-service.dns-4203.svc wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4203 jessie_tcp@dns-test-service.dns-4203 jessie_udp@dns-test-service.dns-4203.svc jessie_tcp@dns-test-service.dns-4203.svc jessie_udp@_http._tcp.dns-test-service.dns-4203.svc jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc]

May 11 21:08:46.156: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.158: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.160: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.165: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.167: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.169: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.607: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.609: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.611: INFO: Unable to read jessie_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.613: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.616: INFO: Unable to read jessie_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.618: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.621: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.623: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:46.638: INFO: Lookups using dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4203 wheezy_tcp@dns-test-service.dns-4203 wheezy_udp@dns-test-service.dns-4203.svc wheezy_tcp@dns-test-service.dns-4203.svc wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4203 jessie_tcp@dns-test-service.dns-4203 jessie_udp@dns-test-service.dns-4203.svc jessie_tcp@dns-test-service.dns-4203.svc jessie_udp@_http._tcp.dns-test-service.dns-4203.svc jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc]

May 11 21:08:51.158: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.161: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.164: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.171: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.174: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.177: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.180: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.201: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.203: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.206: INFO: Unable to read jessie_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.210: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.212: INFO: Unable to read jessie_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.215: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.219: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.221: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:51.244: INFO: Lookups using dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4203 wheezy_tcp@dns-test-service.dns-4203 wheezy_udp@dns-test-service.dns-4203.svc wheezy_tcp@dns-test-service.dns-4203.svc wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4203 jessie_tcp@dns-test-service.dns-4203 jessie_udp@dns-test-service.dns-4203.svc jessie_tcp@dns-test-service.dns-4203.svc jessie_udp@_http._tcp.dns-test-service.dns-4203.svc jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc]

May 11 21:08:56.157: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.161: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.165: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.171: INFO: Unable to read wheezy_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.174: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.178: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.181: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.202: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.204: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.207: INFO: Unable to read jessie_udp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.210: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203 from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.213: INFO: Unable to read jessie_udp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.216: INFO: Unable to read jessie_tcp@dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.218: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.221: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc from pod dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b: the server could not find the requested resource (get pods dns-test-61d38474-6317-42ba-95ad-638945b7b81b)
May 11 21:08:56.238: INFO: Lookups using dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4203 wheezy_tcp@dns-test-service.dns-4203 wheezy_udp@dns-test-service.dns-4203.svc wheezy_tcp@dns-test-service.dns-4203.svc wheezy_udp@_http._tcp.dns-test-service.dns-4203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4203 jessie_tcp@dns-test-service.dns-4203 jessie_udp@dns-test-service.dns-4203.svc jessie_tcp@dns-test-service.dns-4203.svc jessie_udp@_http._tcp.dns-test-service.dns-4203.svc jessie_tcp@_http._tcp.dns-test-service.dns-4203.svc]

May 11 21:09:01.219: INFO: DNS probes using dns-4203/dns-test-61d38474-6317-42ba-95ad-638945b7b81b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:09:02.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4203" for this suite.

• [SLOW TEST:45.819 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":71,"skipped":1245,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:09:02.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:09:02.968: INFO: Creating ReplicaSet my-hostname-basic-372b05aa-3a24-4946-a265-09ed25fea394
May 11 21:09:02.987: INFO: Pod name my-hostname-basic-372b05aa-3a24-4946-a265-09ed25fea394: Found 0 pods out of 1
May 11 21:09:07.994: INFO: Pod name my-hostname-basic-372b05aa-3a24-4946-a265-09ed25fea394: Found 1 pods out of 1
May 11 21:09:07.994: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-372b05aa-3a24-4946-a265-09ed25fea394" is running
May 11 21:09:07.999: INFO: Pod "my-hostname-basic-372b05aa-3a24-4946-a265-09ed25fea394-h8n6v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:09:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:09:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:09:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:09:02 +0000 UTC Reason: Message:}])
May 11 21:09:07.999: INFO: Trying to dial the pod
May 11 21:09:13.009: INFO: Controller my-hostname-basic-372b05aa-3a24-4946-a265-09ed25fea394: Got expected result from replica 1 [my-hostname-basic-372b05aa-3a24-4946-a265-09ed25fea394-h8n6v]: "my-hostname-basic-372b05aa-3a24-4946-a265-09ed25fea394-h8n6v", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:09:13.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-88" for this suite.

• [SLOW TEST:10.159 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":72,"skipped":1259,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:09:13.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:09:13.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 11 21:09:16.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9996 create -f -'
May 11 21:09:19.798: INFO: stderr: ""
May 11 21:09:19.798: INFO: stdout: "e2e-test-crd-publish-openapi-1691-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 11 21:09:19.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9996 delete e2e-test-crd-publish-openapi-1691-crds test-cr'
May 11 21:09:19.901: INFO: stderr: ""
May 11 21:09:19.901: INFO: stdout: "e2e-test-crd-publish-openapi-1691-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May 11 21:09:19.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9996 apply -f -'
May 11 21:09:20.162: INFO: stderr: ""
May 11 21:09:20.162: INFO: stdout: "e2e-test-crd-publish-openapi-1691-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 11 21:09:20.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9996 delete e2e-test-crd-publish-openapi-1691-crds test-cr'
May 11 21:09:20.447: INFO: stderr: ""
May 11 21:09:20.447: INFO: stdout: "e2e-test-crd-publish-openapi-1691-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 11 21:09:20.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1691-crds'
May 11 21:09:20.821: INFO: stderr: ""
May 11 21:09:20.821: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1691-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:09:24.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9996" for this suite.

• [SLOW TEST:11.250 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":73,"skipped":1264,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:09:24.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:09:35.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-252" for this suite.

• [SLOW TEST:11.385 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":74,"skipped":1291,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:09:35.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
May 11 21:09:35.744: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix532620112/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:09:35.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9699" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":75,"skipped":1305,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:09:35.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-3927/configmap-test-f52fc45d-a9c5-4b13-8f92-e6540ded511c
STEP: Creating a pod to test consume configMaps
May 11 21:09:35.978: INFO: Waiting up to 5m0s for pod "pod-configmaps-e30ea9f0-bb0b-4bef-b72b-23b5e814039d" in namespace "configmap-3927" to be "Succeeded or Failed"
May 11 21:09:35.982: INFO: Pod "pod-configmaps-e30ea9f0-bb0b-4bef-b72b-23b5e814039d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006736ms
May 11 21:09:38.062: INFO: Pod "pod-configmaps-e30ea9f0-bb0b-4bef-b72b-23b5e814039d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08348388s
May 11 21:09:40.066: INFO: Pod "pod-configmaps-e30ea9f0-bb0b-4bef-b72b-23b5e814039d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087397154s
May 11 21:09:42.070: INFO: Pod "pod-configmaps-e30ea9f0-bb0b-4bef-b72b-23b5e814039d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091763429s
STEP: Saw pod success
May 11 21:09:42.070: INFO: Pod "pod-configmaps-e30ea9f0-bb0b-4bef-b72b-23b5e814039d" satisfied condition "Succeeded or Failed"
May 11 21:09:42.074: INFO: Trying to get logs from node kali-worker pod pod-configmaps-e30ea9f0-bb0b-4bef-b72b-23b5e814039d container env-test: 
STEP: delete the pod
May 11 21:09:42.116: INFO: Waiting for pod pod-configmaps-e30ea9f0-bb0b-4bef-b72b-23b5e814039d to disappear
May 11 21:09:42.140: INFO: Pod pod-configmaps-e30ea9f0-bb0b-4bef-b72b-23b5e814039d no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:09:42.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3927" for this suite.

• [SLOW TEST:6.228 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1325,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:09:42.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:09:43.480: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:09:45.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828183, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828183, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828183, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828183, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:09:47.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828183, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828183, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828183, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828183, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:09:50.521: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:09:50.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9039" for this suite.
STEP: Destroying namespace "webhook-9039-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.692 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":77,"skipped":1344,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:09:50.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 11 21:09:51.180: INFO: Waiting up to 5m0s for pod "downward-api-3ab38770-7d17-405e-9ecd-0676bc24ec44" in namespace "downward-api-2632" to be "Succeeded or Failed"
May 11 21:09:51.251: INFO: Pod "downward-api-3ab38770-7d17-405e-9ecd-0676bc24ec44": Phase="Pending", Reason="", readiness=false. Elapsed: 70.99839ms
May 11 21:09:53.355: INFO: Pod "downward-api-3ab38770-7d17-405e-9ecd-0676bc24ec44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174318914s
May 11 21:09:55.358: INFO: Pod "downward-api-3ab38770-7d17-405e-9ecd-0676bc24ec44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17745428s
STEP: Saw pod success
May 11 21:09:55.358: INFO: Pod "downward-api-3ab38770-7d17-405e-9ecd-0676bc24ec44" satisfied condition "Succeeded or Failed"
May 11 21:09:55.360: INFO: Trying to get logs from node kali-worker pod downward-api-3ab38770-7d17-405e-9ecd-0676bc24ec44 container dapi-container: 
STEP: delete the pod
May 11 21:09:55.386: INFO: Waiting for pod downward-api-3ab38770-7d17-405e-9ecd-0676bc24ec44 to disappear
May 11 21:09:55.579: INFO: Pod downward-api-3ab38770-7d17-405e-9ecd-0676bc24ec44 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:09:55.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2632" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1374,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:09:55.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 11 21:09:56.006: INFO: Waiting up to 5m0s for pod "downward-api-72a399d9-885c-4c5f-bc16-ed76b9782c38" in namespace "downward-api-837" to be "Succeeded or Failed"
May 11 21:09:56.110: INFO: Pod "downward-api-72a399d9-885c-4c5f-bc16-ed76b9782c38": Phase="Pending", Reason="", readiness=false. Elapsed: 104.065508ms
May 11 21:09:58.446: INFO: Pod "downward-api-72a399d9-885c-4c5f-bc16-ed76b9782c38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439925344s
May 11 21:10:00.487: INFO: Pod "downward-api-72a399d9-885c-4c5f-bc16-ed76b9782c38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481160633s
May 11 21:10:02.491: INFO: Pod "downward-api-72a399d9-885c-4c5f-bc16-ed76b9782c38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.484985053s
STEP: Saw pod success
May 11 21:10:02.491: INFO: Pod "downward-api-72a399d9-885c-4c5f-bc16-ed76b9782c38" satisfied condition "Succeeded or Failed"
May 11 21:10:02.494: INFO: Trying to get logs from node kali-worker2 pod downward-api-72a399d9-885c-4c5f-bc16-ed76b9782c38 container dapi-container: 
STEP: delete the pod
May 11 21:10:02.531: INFO: Waiting for pod downward-api-72a399d9-885c-4c5f-bc16-ed76b9782c38 to disappear
May 11 21:10:02.566: INFO: Pod downward-api-72a399d9-885c-4c5f-bc16-ed76b9782c38 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:02.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-837" for this suite.

• [SLOW TEST:6.911 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1382,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
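The Downward API test above creates a pod whose environment variable is populated from the pod's `status.hostIP` field via a `fieldRef`. A minimal sketch of such a pod manifest, expressed as a Python dict (names and image are illustrative, not the exact manifest the e2e framework generates):

```python
# Sketch of a Downward API pod: HOST_IP is filled in by the kubelet from
# the pod's status.hostIP field before the container starts.
# Container name/image are assumptions for illustration.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "command": ["sh", "-c", "env"],  # pod succeeds after printing env
            "env": [{
                "name": "HOST_IP",
                "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}},
            }],
        }],
    },
}
```

The test then waits for the pod to reach "Succeeded or Failed" and inspects the container log for the expected value, which matches the Pending→Succeeded phase transitions visible in the log above.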
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:02.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May 11 21:10:02.716: INFO: Waiting up to 5m0s for pod "pod-76d83bf3-4e3c-49e7-9e8c-d00f66f8a9a8" in namespace "emptydir-8478" to be "Succeeded or Failed"
May 11 21:10:02.744: INFO: Pod "pod-76d83bf3-4e3c-49e7-9e8c-d00f66f8a9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 27.159223ms
May 11 21:10:04.748: INFO: Pod "pod-76d83bf3-4e3c-49e7-9e8c-d00f66f8a9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03141522s
May 11 21:10:06.751: INFO: Pod "pod-76d83bf3-4e3c-49e7-9e8c-d00f66f8a9a8": Phase="Running", Reason="", readiness=true. Elapsed: 4.03507496s
May 11 21:10:08.755: INFO: Pod "pod-76d83bf3-4e3c-49e7-9e8c-d00f66f8a9a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038830503s
STEP: Saw pod success
May 11 21:10:08.755: INFO: Pod "pod-76d83bf3-4e3c-49e7-9e8c-d00f66f8a9a8" satisfied condition "Succeeded or Failed"
May 11 21:10:08.758: INFO: Trying to get logs from node kali-worker2 pod pod-76d83bf3-4e3c-49e7-9e8c-d00f66f8a9a8 container test-container: 
STEP: delete the pod
May 11 21:10:08.824: INFO: Waiting for pod pod-76d83bf3-4e3c-49e7-9e8c-d00f66f8a9a8 to disappear
May 11 21:10:08.829: INFO: Pod pod-76d83bf3-4e3c-49e7-9e8c-d00f66f8a9a8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:08.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8478" for this suite.

• [SLOW TEST:6.261 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1444,"failed":0}
SSSSSSSSSSSSSSSSSS
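The EmptyDir test name encodes its parameters: `(root,0777,default)` means the container runs as root, expects permission bits 0777 on the mount point, and uses the node's default storage medium. A sketch of the shape of such a pod, with illustrative names (the real test image verifies the permissions itself):

```python
# Sketch of the emptyDir test pod: an {} emptyDir (default medium) mounted
# into a container that prints the mount point's octal permissions.
expected_mode = 0o777  # from the test name: (root,0777,default)

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            # stat -c %a prints the octal permission bits of the path
            "command": ["sh", "-c", "stat -c %a /test-volume"],
            "volumeMounts": [{"name": "scratch", "mountPath": "/test-volume"}],
        }],
        "volumes": [{"name": "scratch", "emptyDir": {}}],  # {} selects the default medium
    },
}
```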
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:08.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:13.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4110" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1462,"failed":0}
SSSSSSSSSSSS
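The Kubelet test above runs a command that always fails and then asserts that the container's status reports a terminated state with a reason. A sketch of the container-status shape the test inspects (values are illustrative; the structure follows the v1 `ContainerStatus` API):

```python
# Sketch of the containerStatuses entry the test checks: a failed command
# leaves the container in a "terminated" state carrying a non-zero exit
# code and a reason string. Values here are assumptions for illustration.
container_status = {
    "name": "bin-false-container",
    "state": {
        "terminated": {
            "exitCode": 1,
            "reason": "Error",  # the "terminated reason" the test requires
        }
    },
}

terminated = container_status["state"].get("terminated")
```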
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:13.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-582da8c3-ed29-43ab-aad9-d2301c11c714
STEP: Creating a pod to test consume configMaps
May 11 21:10:13.542: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-67770953-4967-4749-92de-86404d6958d5" in namespace "projected-5624" to be "Succeeded or Failed"
May 11 21:10:13.561: INFO: Pod "pod-projected-configmaps-67770953-4967-4749-92de-86404d6958d5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.718958ms
May 11 21:10:15.894: INFO: Pod "pod-projected-configmaps-67770953-4967-4749-92de-86404d6958d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.352015962s
May 11 21:10:17.898: INFO: Pod "pod-projected-configmaps-67770953-4967-4749-92de-86404d6958d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.355462854s
STEP: Saw pod success
May 11 21:10:17.898: INFO: Pod "pod-projected-configmaps-67770953-4967-4749-92de-86404d6958d5" satisfied condition "Succeeded or Failed"
May 11 21:10:17.900: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-67770953-4967-4749-92de-86404d6958d5 container projected-configmap-volume-test: 
STEP: delete the pod
May 11 21:10:18.013: INFO: Waiting for pod pod-projected-configmaps-67770953-4967-4749-92de-86404d6958d5 to disappear
May 11 21:10:18.022: INFO: Pod pod-projected-configmaps-67770953-4967-4749-92de-86404d6958d5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:18.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5624" for this suite.

• [SLOW TEST:5.025 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1474,"failed":0}
SSSSSSSSSSSSSSS
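The projected-ConfigMap test exercises `defaultMode`: when a ConfigMap is projected into a volume, its keys appear as files whose permission bits default to this value. A sketch of the volume stanza (the mode and ConfigMap name here are illustrative, not taken from the suite):

```python
# Sketch of a projected volume with defaultMode set: every file projected
# from the ConfigMap gets these permission bits unless a per-item mode
# overrides it. 0o400 (r--------) is an illustrative choice.
pod_volume = {
    "name": "projected-configmap-volume",
    "projected": {
        "defaultMode": 0o400,
        "sources": [{
            "configMap": {"name": "projected-configmap-test-volume"},
        }],
    },
}
```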
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:18.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3655
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-3655
I0511 21:10:18.291232       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3655, replica count: 2
I0511 21:10:21.341690       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:10:24.341890       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 11 21:10:24.341: INFO: Creating new exec pod
May 11 21:10:31.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3655 execpodfzg2w -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 11 21:10:31.612: INFO: stderr: "I0511 21:10:31.528209    1148 log.go:172] (0xc000a854a0) (0xc0009ea6e0) Create stream\nI0511 21:10:31.528280    1148 log.go:172] (0xc000a854a0) (0xc0009ea6e0) Stream added, broadcasting: 1\nI0511 21:10:31.534854    1148 log.go:172] (0xc000a854a0) Reply frame received for 1\nI0511 21:10:31.534978    1148 log.go:172] (0xc000a854a0) (0xc0006b55e0) Create stream\nI0511 21:10:31.535042    1148 log.go:172] (0xc000a854a0) (0xc0006b55e0) Stream added, broadcasting: 3\nI0511 21:10:31.539707    1148 log.go:172] (0xc000a854a0) Reply frame received for 3\nI0511 21:10:31.539750    1148 log.go:172] (0xc000a854a0) (0xc000532a00) Create stream\nI0511 21:10:31.539763    1148 log.go:172] (0xc000a854a0) (0xc000532a00) Stream added, broadcasting: 5\nI0511 21:10:31.540667    1148 log.go:172] (0xc000a854a0) Reply frame received for 5\nI0511 21:10:31.608494    1148 log.go:172] (0xc000a854a0) Data frame received for 5\nI0511 21:10:31.608613    1148 log.go:172] (0xc000532a00) (5) Data frame handling\nI0511 21:10:31.608635    1148 log.go:172] (0xc000532a00) (5) Data frame sent\nI0511 21:10:31.608649    1148 log.go:172] (0xc000a854a0) Data frame received for 5\nI0511 21:10:31.608655    1148 log.go:172] (0xc000532a00) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0511 21:10:31.608681    1148 log.go:172] (0xc000a854a0) Data frame received for 3\nI0511 21:10:31.608709    1148 log.go:172] (0xc0006b55e0) (3) Data frame handling\nI0511 21:10:31.608732    1148 log.go:172] (0xc000532a00) (5) Data frame sent\nI0511 21:10:31.608747    1148 log.go:172] (0xc000a854a0) Data frame received for 5\nI0511 21:10:31.608752    1148 log.go:172] (0xc000532a00) (5) Data frame handling\nI0511 21:10:31.609061    1148 log.go:172] (0xc000a854a0) Data frame received for 1\nI0511 21:10:31.609084    1148 log.go:172] (0xc0009ea6e0) (1) Data frame handling\nI0511 21:10:31.609409    1148 log.go:172] (0xc0009ea6e0) (1) Data frame sent\nI0511 21:10:31.609442    1148 log.go:172] (0xc000a854a0) (0xc0009ea6e0) Stream removed, broadcasting: 1\nI0511 21:10:31.609472    1148 log.go:172] (0xc000a854a0) Go away received\nI0511 21:10:31.609805    1148 log.go:172] (0xc000a854a0) (0xc0009ea6e0) Stream removed, broadcasting: 1\nI0511 21:10:31.609821    1148 log.go:172] (0xc000a854a0) (0xc0006b55e0) Stream removed, broadcasting: 3\nI0511 21:10:31.609830    1148 log.go:172] (0xc000a854a0) (0xc000532a00) Stream removed, broadcasting: 5\n"
May 11 21:10:31.613: INFO: stdout: ""
May 11 21:10:31.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3655 execpodfzg2w -- /bin/sh -x -c nc -zv -t -w 2 10.110.205.77 80'
May 11 21:10:31.816: INFO: stderr: "I0511 21:10:31.736725    1169 log.go:172] (0xc0000e8d10) (0xc0006b9540) Create stream\nI0511 21:10:31.736815    1169 log.go:172] (0xc0000e8d10) (0xc0006b9540) Stream added, broadcasting: 1\nI0511 21:10:31.739955    1169 log.go:172] (0xc0000e8d10) Reply frame received for 1\nI0511 21:10:31.739989    1169 log.go:172] (0xc0000e8d10) (0xc0006b95e0) Create stream\nI0511 21:10:31.739999    1169 log.go:172] (0xc0000e8d10) (0xc0006b95e0) Stream added, broadcasting: 3\nI0511 21:10:31.740815    1169 log.go:172] (0xc0000e8d10) Reply frame received for 3\nI0511 21:10:31.740868    1169 log.go:172] (0xc0000e8d10) (0xc0006275e0) Create stream\nI0511 21:10:31.740895    1169 log.go:172] (0xc0000e8d10) (0xc0006275e0) Stream added, broadcasting: 5\nI0511 21:10:31.742139    1169 log.go:172] (0xc0000e8d10) Reply frame received for 5\nI0511 21:10:31.809931    1169 log.go:172] (0xc0000e8d10) Data frame received for 3\nI0511 21:10:31.809973    1169 log.go:172] (0xc0006b95e0) (3) Data frame handling\nI0511 21:10:31.810008    1169 log.go:172] (0xc0000e8d10) Data frame received for 5\nI0511 21:10:31.810036    1169 log.go:172] (0xc0006275e0) (5) Data frame handling\nI0511 21:10:31.810059    1169 log.go:172] (0xc0006275e0) (5) Data frame sent\nI0511 21:10:31.810070    1169 log.go:172] (0xc0000e8d10) Data frame received for 5\nI0511 21:10:31.810080    1169 log.go:172] (0xc0006275e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.205.77 80\nConnection to 10.110.205.77 80 port [tcp/http] succeeded!\nI0511 21:10:31.811268    1169 log.go:172] (0xc0000e8d10) Data frame received for 1\nI0511 21:10:31.811291    1169 log.go:172] (0xc0006b9540) (1) Data frame handling\nI0511 21:10:31.811304    1169 log.go:172] (0xc0006b9540) (1) Data frame sent\nI0511 21:10:31.811320    1169 log.go:172] (0xc0000e8d10) (0xc0006b9540) Stream removed, broadcasting: 1\nI0511 21:10:31.811337    1169 log.go:172] (0xc0000e8d10) Go away received\nI0511 21:10:31.811837    1169 log.go:172] (0xc0000e8d10) (0xc0006b9540) Stream removed, broadcasting: 1\nI0511 21:10:31.811870    1169 log.go:172] (0xc0000e8d10) (0xc0006b95e0) Stream removed, broadcasting: 3\nI0511 21:10:31.811883    1169 log.go:172] (0xc0000e8d10) (0xc0006275e0) Stream removed, broadcasting: 5\n"
May 11 21:10:31.816: INFO: stdout: ""
May 11 21:10:31.816: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:31.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3655" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.801 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":83,"skipped":1489,"failed":0}
SS
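The Services test above flips a service's type in place: it starts as `ExternalName` (a pure DNS alias, no cluster IP) and is updated to `ClusterIP`, backed by the replication controller's two pods, after which the `nc` probes in the log confirm connectivity by service name and by the allocated cluster IP. A sketch of that mutation (the external hostname and selector are illustrative):

```python
# Sketch of the ExternalName -> ClusterIP flip. The external hostname and
# selector labels are assumptions for illustration.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "externalname-service"},
    "spec": {
        "type": "ExternalName",
        "externalName": "example.invalid",  # DNS alias; no cluster IP yet
    },
}

# Change the type: drop externalName, add a selector and a port so the
# cluster allocates an IP and kube-proxy starts routing to endpoints.
service["spec"] = {
    "type": "ClusterIP",
    "selector": {"name": "externalname-service"},
    "ports": [{"port": 80, "targetPort": 80, "protocol": "TCP"}],
}
```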
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:31.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:10:32.427: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"885412f5-9193-4187-8e6e-c45f0769a196", Controller:(*bool)(0xc0037fcc62), BlockOwnerDeletion:(*bool)(0xc0037fcc63)}}
May 11 21:10:32.551: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2377cd56-2512-4671-b8c1-9fc4edb4c80d", Controller:(*bool)(0xc0043f6fb2), BlockOwnerDeletion:(*bool)(0xc0043f6fb3)}}
May 11 21:10:32.598: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a5229b6e-0541-4c40-9a99-6189f448c804", Controller:(*bool)(0xc0043f72e6), BlockOwnerDeletion:(*bool)(0xc0043f72e7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:37.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-145" for this suite.

• [SLOW TEST:5.829 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":84,"skipped":1491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
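The garbage collector test above builds exactly the cycle visible in its log output: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The assertion is that deletion still proceeds rather than deadlocking on the circular ownerReferences. A small sketch of the cycle as a graph (detection logic here is illustrative, not the GC's actual algorithm):

```python
# The ownerReference cycle from the log: each pod names another pod as its
# owner, closing a loop. A simple follow-the-owner walk detects it.
owners = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}

def has_cycle(graph):
    """Follow owner links from each node; revisiting a node means a cycle."""
    for start in graph:
        seen, node = set(), start
        while node in graph:
            if node in seen:
                return True
            seen.add(node)
            node = graph[node]
    return False
```

The point of the conformance test is that such a cycle must not block garbage collection: all three pods are removed when deleted, which is why the test passes after the ~5 second wait in the log.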
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:37.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
May 11 21:10:37.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info'
May 11 21:10:38.052: INFO: stderr: ""
May 11 21:10:38.052: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:38.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3402" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":85,"skipped":1515,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:38.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
May 11 21:10:38.167: INFO: Waiting up to 5m0s for pod "var-expansion-4ad4701c-a7c4-4b30-a3ad-e0432c8dadfa" in namespace "var-expansion-920" to be "Succeeded or Failed"
May 11 21:10:38.559: INFO: Pod "var-expansion-4ad4701c-a7c4-4b30-a3ad-e0432c8dadfa": Phase="Pending", Reason="", readiness=false. Elapsed: 392.145816ms
May 11 21:10:40.562: INFO: Pod "var-expansion-4ad4701c-a7c4-4b30-a3ad-e0432c8dadfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.395496392s
May 11 21:10:42.566: INFO: Pod "var-expansion-4ad4701c-a7c4-4b30-a3ad-e0432c8dadfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.398618724s
STEP: Saw pod success
May 11 21:10:42.566: INFO: Pod "var-expansion-4ad4701c-a7c4-4b30-a3ad-e0432c8dadfa" satisfied condition "Succeeded or Failed"
May 11 21:10:42.568: INFO: Trying to get logs from node kali-worker pod var-expansion-4ad4701c-a7c4-4b30-a3ad-e0432c8dadfa container dapi-container: 
STEP: delete the pod
May 11 21:10:42.659: INFO: Waiting for pod var-expansion-4ad4701c-a7c4-4b30-a3ad-e0432c8dadfa to disappear
May 11 21:10:42.744: INFO: Pod var-expansion-4ad4701c-a7c4-4b30-a3ad-e0432c8dadfa no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:42.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-920" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1524,"failed":0}
SSSSSSSSSSSSSSS
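The Variable Expansion test verifies that later `env` entries can reference earlier ones with `$(NAME)` syntax, which the kubelet substitutes before starting the container. A simplified sketch of that substitution (no `$$` escaping; variable names and values are illustrative):

```python
# Env composition as the test exercises it: FOOBAR is built from the two
# entries defined before it. expand() mimics the kubelet's left-to-right
# $(NAME) substitution in simplified form (unknown names left as-is).
env = [
    {"name": "FOO", "value": "foo-value"},
    {"name": "BAR", "value": "bar-value"},
    {"name": "FOOBAR", "value": "$(FOO);;$(BAR)"},
]

def expand(entries):
    resolved = {}
    for e in entries:
        value = e["value"]
        # Only variables defined earlier in the list are available.
        for name, val in resolved.items():
            value = value.replace("$(%s)" % name, val)
        resolved[e["name"]] = value
    return resolved
```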
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:42.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-280/configmap-test-da220e45-6e6b-4c81-817d-fb6c8abe831c
STEP: Creating a pod to test consume configMaps
May 11 21:10:42.922: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4af8bda-ee56-41f5-b7a5-369f41fa3bc4" in namespace "configmap-280" to be "Succeeded or Failed"
May 11 21:10:42.927: INFO: Pod "pod-configmaps-f4af8bda-ee56-41f5-b7a5-369f41fa3bc4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.166515ms
May 11 21:10:45.278: INFO: Pod "pod-configmaps-f4af8bda-ee56-41f5-b7a5-369f41fa3bc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355809325s
May 11 21:10:47.282: INFO: Pod "pod-configmaps-f4af8bda-ee56-41f5-b7a5-369f41fa3bc4": Phase="Running", Reason="", readiness=true. Elapsed: 4.359852538s
May 11 21:10:49.285: INFO: Pod "pod-configmaps-f4af8bda-ee56-41f5-b7a5-369f41fa3bc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.363084577s
STEP: Saw pod success
May 11 21:10:49.285: INFO: Pod "pod-configmaps-f4af8bda-ee56-41f5-b7a5-369f41fa3bc4" satisfied condition "Succeeded or Failed"
May 11 21:10:49.287: INFO: Trying to get logs from node kali-worker pod pod-configmaps-f4af8bda-ee56-41f5-b7a5-369f41fa3bc4 container env-test: 
STEP: delete the pod
May 11 21:10:49.312: INFO: Waiting for pod pod-configmaps-f4af8bda-ee56-41f5-b7a5-369f41fa3bc4 to disappear
May 11 21:10:49.316: INFO: Pod pod-configmaps-f4af8bda-ee56-41f5-b7a5-369f41fa3bc4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:49.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-280" for this suite.

• [SLOW TEST:6.571 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1539,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:49.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-e50d0f4b-b2e9-4013-a6fe-4f7e53112ff6
STEP: Creating a pod to test consume configMaps
May 11 21:10:49.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-1c4a8787-5957-45e8-a9a0-1ea241c65ce7" in namespace "configmap-332" to be "Succeeded or Failed"
May 11 21:10:49.556: INFO: Pod "pod-configmaps-1c4a8787-5957-45e8-a9a0-1ea241c65ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221016ms
May 11 21:10:51.559: INFO: Pod "pod-configmaps-1c4a8787-5957-45e8-a9a0-1ea241c65ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005551987s
May 11 21:10:53.563: INFO: Pod "pod-configmaps-1c4a8787-5957-45e8-a9a0-1ea241c65ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009291558s
May 11 21:10:55.567: INFO: Pod "pod-configmaps-1c4a8787-5957-45e8-a9a0-1ea241c65ce7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013123152s
STEP: Saw pod success
May 11 21:10:55.567: INFO: Pod "pod-configmaps-1c4a8787-5957-45e8-a9a0-1ea241c65ce7" satisfied condition "Succeeded or Failed"
May 11 21:10:55.569: INFO: Trying to get logs from node kali-worker pod pod-configmaps-1c4a8787-5957-45e8-a9a0-1ea241c65ce7 container configmap-volume-test: 
STEP: delete the pod
May 11 21:10:55.588: INFO: Waiting for pod pod-configmaps-1c4a8787-5957-45e8-a9a0-1ea241c65ce7 to disappear
May 11 21:10:55.618: INFO: Pod pod-configmaps-1c4a8787-5957-45e8-a9a0-1ea241c65ce7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:10:55.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-332" for this suite.

• [SLOW TEST:6.301 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1551,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:10:55.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:10:56.377: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:10:58.475: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828256, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828256, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828256, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828256, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:11:00.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828256, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828256, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828256, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828256, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:11:04.501: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:11:04.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-168" for this suite.
STEP: Destroying namespace "webhook-168-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.166 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":89,"skipped":1561,"failed":0}
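The test above verifies that the apiserver never sends ValidatingWebhookConfiguration or MutatingWebhookConfiguration objects through admission webhooks, so a misbehaving webhook cannot block its own removal (the "dummy" configurations stay deletable). A minimal sketch of the kind of validating configuration the test registers against these resource types — the metadata name, service name/path, and namespace wiring here are illustrative assumptions, not the test's actual generated values:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-webhook-configuration-deletions   # illustrative name
webhooks:
- name: deny-webhook-configuration-deletions.example.com
  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["*"]
    operations: ["DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  clientConfig:
    service:
      namespace: webhook-168      # namespace from this run
      name: e2e-test-webhook      # service deployed in the BeforeEach above
      path: /always-deny          # illustrative path
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
```

Even with `failurePolicy: Fail`, deleting the dummy configurations succeeds, because the apiserver short-circuits webhook calls for these resource kinds.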
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:11:04.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: getting the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:11:04.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1769" for this suite.
STEP: Destroying namespace "nspatchtest-5480896e-f551-4a20-bba5-99efd39133a2-5193" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":90,"skipped":1608,"failed":0}
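The "patching the Namespace" step boils down to sending a merge patch that adds a label, then reading the Namespace back to confirm the label stuck. A sketch of such a patch body — the label key and value are assumptions, not the test's generated ones:

```json
{
  "metadata": {
    "labels": {
      "testLabel": "testValue"
    }
  }
}
```

Applied with, for example, `kubectl patch namespace <name> --type=merge -p '<body>'`.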
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:11:05.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0511 21:11:46.307388       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 21:11:46.307: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:11:46.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8119" for this suite.

• [SLOW TEST:41.288 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":91,"skipped":1609,"failed":0}
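Orphaning is requested through the delete options on the ReplicationController deletion. With this propagation policy the garbage collector strips the `ownerReferences` from the dependent pods instead of deleting them, which is why the 30-second watch above sees the pods survive. A sketch of the request body:

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

The CLI equivalent is `kubectl delete rc <name> --cascade=orphan` (older kubectl releases spelled this `--cascade=false`).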
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:11:46.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 11 21:11:46.427: INFO: Pod name pod-release: Found 0 pods out of 1
May 11 21:11:51.439: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:11:51.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9072" for this suite.

• [SLOW TEST:6.419 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":92,"skipped":1635,"failed":0}
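A pod is "released" once its labels no longer match the ReplicationController's selector: the controller drops its `ownerReference` on the pod and creates a replacement to restore the replica count. A sketch of the relabeling patch applied to the pod — the `name` key is inferred from the `pod-release` pod name logged above, and the new value is illustrative:

```json
{
  "metadata": {
    "labels": {
      "name": "not-pod-release"
    }
  }
}
```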
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:11:52.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:11:54.279: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 11 21:11:54.559: INFO: Number of nodes with available pods: 0
May 11 21:11:54.559: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 11 21:11:54.882: INFO: Number of nodes with available pods: 0
May 11 21:11:54.882: INFO: Node kali-worker is running more than one daemon pod
May 11 21:11:56.094: INFO: Number of nodes with available pods: 0
May 11 21:11:56.094: INFO: Node kali-worker is running more than one daemon pod
May 11 21:11:57.080: INFO: Number of nodes with available pods: 0
May 11 21:11:57.080: INFO: Node kali-worker is running more than one daemon pod
May 11 21:11:58.002: INFO: Number of nodes with available pods: 0
May 11 21:11:58.003: INFO: Node kali-worker is running more than one daemon pod
May 11 21:11:58.961: INFO: Number of nodes with available pods: 0
May 11 21:11:58.961: INFO: Node kali-worker is running more than one daemon pod
May 11 21:11:59.912: INFO: Number of nodes with available pods: 0
May 11 21:11:59.912: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:01.230: INFO: Number of nodes with available pods: 1
May 11 21:12:01.230: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 11 21:12:01.762: INFO: Number of nodes with available pods: 1
May 11 21:12:01.762: INFO: Number of running nodes: 0, number of available pods: 1
May 11 21:12:02.819: INFO: Number of nodes with available pods: 0
May 11 21:12:02.819: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 11 21:12:03.332: INFO: Number of nodes with available pods: 0
May 11 21:12:03.332: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:04.336: INFO: Number of nodes with available pods: 0
May 11 21:12:04.336: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:05.337: INFO: Number of nodes with available pods: 0
May 11 21:12:05.337: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:06.336: INFO: Number of nodes with available pods: 0
May 11 21:12:06.336: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:07.337: INFO: Number of nodes with available pods: 0
May 11 21:12:07.337: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:08.336: INFO: Number of nodes with available pods: 0
May 11 21:12:08.336: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:09.460: INFO: Number of nodes with available pods: 0
May 11 21:12:09.460: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:10.634: INFO: Number of nodes with available pods: 0
May 11 21:12:10.634: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:11.336: INFO: Number of nodes with available pods: 0
May 11 21:12:11.336: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:12.350: INFO: Number of nodes with available pods: 0
May 11 21:12:12.350: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:13.336: INFO: Number of nodes with available pods: 0
May 11 21:12:13.336: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:14.342: INFO: Number of nodes with available pods: 0
May 11 21:12:14.342: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:15.336: INFO: Number of nodes with available pods: 0
May 11 21:12:15.337: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:16.387: INFO: Number of nodes with available pods: 0
May 11 21:12:16.387: INFO: Node kali-worker is running more than one daemon pod
May 11 21:12:17.336: INFO: Number of nodes with available pods: 1
May 11 21:12:17.336: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3617, will wait for the garbage collector to delete the pods
May 11 21:12:17.399: INFO: Deleting DaemonSet.extensions daemon-set took: 5.117765ms
May 11 21:12:17.699: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.237206ms
May 11 21:12:23.802: INFO: Number of nodes with available pods: 0
May 11 21:12:23.802: INFO: Number of running nodes: 0, number of available pods: 0
May 11 21:12:23.804: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3617/daemonsets","resourceVersion":"3517720"},"items":null}

May 11 21:12:23.806: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3617/pods","resourceVersion":"3517720"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:12:23.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3617" for this suite.

• [SLOW TEST:31.109 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":93,"skipped":1695,"failed":0}
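The daemon set test above drives scheduling purely through labels: pods launch or terminate as the node label flips between blue and green, and the update strategy is switched to RollingUpdate mid-run. A minimal sketch of such a DaemonSet — the pod label, node-label key, and container name are assumptions (the test generates its own):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # assumed pod label
  updateStrategy:
    type: RollingUpdate          # strategy the test switches to mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green             # assumed label key; the test flips blue -> green
      containers:
      - name: app
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```

Relabeling a node (`kubectl label node kali-worker color=green --overwrite`) is then enough to move the daemon pod.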
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:12:23.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
May 11 21:12:23.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions'
May 11 21:12:24.170: INFO: stderr: ""
May 11 21:12:24.171: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:12:24.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9578" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":94,"skipped":1705,"failed":0}
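The assertion behind this test is simply that the legacy core group/version `v1` appears as its own line in `kubectl api-versions` output. A stdlib-only sketch of that check — `has_core_v1` is a hypothetical helper, and the sample is abridged from the stdout captured above:

```python
def has_core_v1(api_versions_stdout: str) -> bool:
    """Return True if the legacy core group/version "v1" is listed.

    Matching whole lines avoids false positives from group/versions
    like "apps/v1" that merely contain the substring "v1".
    """
    return "v1" in api_versions_stdout.splitlines()

# Abridged from the `kubectl api-versions` stdout in the run above.
sample = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nstorage.k8s.io/v1\nv1\n"

print(has_core_v1(sample))  # -> True
```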
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:12:24.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod

May 11 21:12:24.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-2749 -- logs-generator --log-lines-total 100 --run-duration 20s'
May 11 21:12:24.455: INFO: stderr: ""
May 11 21:12:24.455: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
May 11 21:12:24.455: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
May 11 21:12:24.455: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2749" to be "running and ready, or succeeded"
May 11 21:12:24.477: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 22.095583ms
May 11 21:12:26.480: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02537292s
May 11 21:12:28.484: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029130435s
May 11 21:12:30.566: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.11106713s
May 11 21:12:30.566: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
May 11 21:12:30.566: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
May 11 21:12:30.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2749'
May 11 21:12:30.761: INFO: stderr: ""
May 11 21:12:30.761: INFO: stdout: "I0511 21:12:29.198238       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/r6cg 427\nI0511 21:12:29.398419       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/fr7 348\nI0511 21:12:29.598332       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/2l7 336\nI0511 21:12:29.798316       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/hl76 362\nI0511 21:12:29.998384       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/nrxr 490\nI0511 21:12:30.198402       1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/77qs 598\nI0511 21:12:30.398448       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/hgtd 310\nI0511 21:12:30.598417       1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/kprm 599\n"
STEP: limiting log lines
May 11 21:12:30.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2749 --tail=1'
May 11 21:12:30.875: INFO: stderr: ""
May 11 21:12:30.875: INFO: stdout: "I0511 21:12:30.798382       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/qmf 500\n"
May 11 21:12:30.875: INFO: got output "I0511 21:12:30.798382       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/qmf 500\n"
STEP: limiting log bytes
May 11 21:12:30.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2749 --limit-bytes=1'
May 11 21:12:30.969: INFO: stderr: ""
May 11 21:12:30.969: INFO: stdout: "I"
May 11 21:12:30.969: INFO: got output "I"
STEP: exposing timestamps
May 11 21:12:30.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2749 --tail=1 --timestamps'
May 11 21:12:31.088: INFO: stderr: ""
May 11 21:12:31.088: INFO: stdout: "2020-05-11T21:12:30.99850634Z I0511 21:12:30.998348       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/bwd 217\n"
May 11 21:12:31.088: INFO: got output "2020-05-11T21:12:30.99850634Z I0511 21:12:30.998348       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/bwd 217\n"
STEP: restricting to a time range
May 11 21:12:33.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2749 --since=1s'
May 11 21:12:33.692: INFO: stderr: ""
May 11 21:12:33.692: INFO: stdout: "I0511 21:12:32.798386       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/qpl 367\nI0511 21:12:32.998390       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/zlmg 235\nI0511 21:12:33.198389       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/5bmn 285\nI0511 21:12:33.398372       1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/twqs 347\nI0511 21:12:33.598364       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/dgj 575\n"
May 11 21:12:33.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2749 --since=24h'
May 11 21:12:33.821: INFO: stderr: ""
May 11 21:12:33.821: INFO: stdout: "I0511 21:12:29.198238       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/r6cg 427\nI0511 21:12:29.398419       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/fr7 348\nI0511 21:12:29.598332       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/2l7 336\nI0511 21:12:29.798316       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/hl76 362\nI0511 21:12:29.998384       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/nrxr 490\nI0511 21:12:30.198402       1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/77qs 598\nI0511 21:12:30.398448       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/hgtd 310\nI0511 21:12:30.598417       1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/kprm 599\nI0511 21:12:30.798382       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/qmf 500\nI0511 21:12:30.998348       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/bwd 217\nI0511 21:12:31.198376       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/tgqz 245\nI0511 21:12:31.398380       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/58g 319\nI0511 21:12:31.598433       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/qgp4 417\nI0511 21:12:31.798393       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/t4ck 249\nI0511 21:12:31.998440       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/bc5n 589\nI0511 21:12:32.198404       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/bgbv 542\nI0511 21:12:32.398373       1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/jfvw 360\nI0511 21:12:32.598383       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/zbqf 571\nI0511 21:12:32.798386       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/qpl 367\nI0511 21:12:32.998390       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/zlmg 235\nI0511 21:12:33.198389       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/5bmn 285\nI0511 21:12:33.398372       1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/twqs 347\nI0511 21:12:33.598364       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/dgj 575\nI0511 21:12:33.798373       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/sssx 412\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
May 11 21:12:33.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2749'
May 11 21:12:36.122: INFO: stderr: ""
May 11 21:12:36.122: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:12:36.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2749" for this suite.

• [SLOW TEST:11.980 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":95,"skipped":1732,"failed":0}
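The filters exercised above are easy to model: `--tail=N` keeps the last N lines and `--limit-bytes=B` truncates the stream after B bytes (which is exactly why the run above got back the single character "I"). A stdlib-only sketch of the two behaviors — the function names are illustrative, not kubectl internals:

```python
def tail(log: str, n: int) -> str:
    """Mimic `kubectl logs --tail=N`: keep only the last N lines."""
    return "".join(log.splitlines(keepends=True)[-n:])

def limit_bytes(log: str, b: int) -> str:
    """Mimic `kubectl logs --limit-bytes=B`: truncate after B bytes."""
    return log.encode()[:b].decode(errors="ignore")

log = "line 1\nline 2\nline 3\n"
print(tail(log, 1))         # -> "line 3\n"
print(limit_bytes(log, 1))  # -> "l"
```

`--since=1s`, also used above, instead filters by each entry's timestamp relative to now, so its output depends on when the command runs.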
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:12:36.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:12:36.280: INFO: Creating deployment "test-recreate-deployment"
May 11 21:12:36.380: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May 11 21:12:36.406: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May 11 21:12:38.412: INFO: Waiting deployment "test-recreate-deployment" to complete
May 11 21:12:38.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828356, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828356, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828356, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828356, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:12:40.418: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May 11 21:12:40.424: INFO: Updating deployment test-recreate-deployment
May 11 21:12:40.424: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 11 21:12:41.771: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3180 /apis/apps/v1/namespaces/deployment-3180/deployments/test-recreate-deployment 3712a7f5-405a-4718-b5c2-1eaef7e7aae3 3517859 2 2020-05-11 21:12:36 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-11 21:12:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 
101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-11 21:12:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 
34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036df368  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-11 21:12:41 +0000 UTC,LastTransitionTime:2020-05-11 21:12:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-11 21:12:41 +0000 UTC,LastTransitionTime:2020-05-11 21:12:36 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

May 11 21:12:42.334: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-3180 /apis/apps/v1/namespaces/deployment-3180/replicasets/test-recreate-deployment-d5667d9c7 8e9f5cb8-6460-415a-aa2f-fa22d77f9f99 3517857 1 2020-05-11 21:12:40 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3712a7f5-405a-4718-b5c2-1eaef7e7aae3 0xc0036fe810 0xc0036fe811}] []  [{kube-controller-manager Update apps/v1 2020-05-11 21:12:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 55 49 50 97 55 102 53 45 52 48 53 97 45 52 55 49 56 45 98 53 99 50 45 49 101 97 101 102 55 101 55 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 
123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 
58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036fe888  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 11 21:12:42.334: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
May 11 21:12:42.334: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-3180 /apis/apps/v1/namespaces/deployment-3180/replicasets/test-recreate-deployment-74d98b5f7c 63f2b930-1fe2-405e-9cb8-5f95d6d727fb 3517848 2 2020-05-11 21:12:36 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3712a7f5-405a-4718-b5c2-1eaef7e7aae3 0xc0036fe717 0xc0036fe718}] []  [{kube-controller-manager Update apps/v1 2020-05-11 21:12:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 55 49 50 97 55 102 53 45 52 48 53 97 45 52 55 49 56 45 98 53 99 50 45 49 101 97 101 102 55 101 55 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 
111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036fe7a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 11 21:12:42.338: INFO: Pod "test-recreate-deployment-d5667d9c7-9jkxc" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-9jkxc test-recreate-deployment-d5667d9c7- deployment-3180 /api/v1/namespaces/deployment-3180/pods/test-recreate-deployment-d5667d9c7-9jkxc ac393816-6245-47c2-bd16-b677bf0bd00d 3517860 0 2020-05-11 21:12:41 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 8e9f5cb8-6460-415a-aa2f-fa22d77f9f99 0xc0036fed60 0xc0036fed61}] []  [{kube-controller-manager Update v1 2020-05-11 21:12:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 101 57 102 53 99 98 56 45 54 52 54 48 45 52 49 53 97 45 97 97 50 102 45 102 97 50 50 100 55 55 102 57 102 57 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 
117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 21:12:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 
97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7v5rl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7v5rl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7v5rl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:12:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:12:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-11 21:12:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:12:42.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3180" for this suite.

• [SLOW TEST:6.470 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":96,"skipped":1746,"failed":0}
SSSSS
------------------------------
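The `FieldsV1{Raw:*[...]}` managedFields payloads in the Deployment, ReplicaSet, and Pod dumps above are rendered by Go's `%v` verb as runs of decimal byte values rather than text. A minimal sketch (assuming that decimal-byte rendering) for decoding such a run back into readable JSON:

```python
def decode_fieldsv1(raw: str) -> str:
    """Convert a space-separated run of decimal byte values, as printed
    inside FieldsV1{Raw:*[...]}, back into its UTF-8 JSON text."""
    data = bytes(int(tok) for tok in raw.split())
    return data.decode("utf-8")

# First few bytes of the Deployment dump above:
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"
print(decode_fieldsv1(sample))  # -> {"f:metadata":{
```

Running this over a full `Raw` array recovers the managed-fields JSON (`{"f:metadata":{"f:labels":...}}`) that the apply/update machinery tracks per manager.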
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:12:42.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:12:43.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6554" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":97,"skipped":1751,"failed":0}
S
------------------------------
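The Services test above lists services across all namespaces and locates the one it created. A minimal sketch of that lookup over an all-namespaces list response — the item shape is assumed from the core/v1 ServiceList schema, and the service names here are illustrative:

```python
# Hypothetical ServiceList items, as returned by GET /api/v1/services:
service_list = [
    {"metadata": {"name": "kubernetes", "namespace": "default"}},
    {"metadata": {"name": "kube-dns", "namespace": "kube-system"}},
    {"metadata": {"name": "test-service", "namespace": "services-6554"}},
]

def find_service(items, name):
    """Return the first service with a matching name, regardless of namespace."""
    return next((s for s in items if s["metadata"]["name"] == name), None)

found = find_service(service_list, "test-service")
print(found["metadata"]["namespace"])  # -> services-6554
```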
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:12:43.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:12:44.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2106" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":98,"skipped":1752,"failed":0}
SS
------------------------------
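The "patching the secret" step above sends a patch through the API server. As a rough illustration of JSON merge-patch semantics (RFC 7386) — the log does not show which patch type the framework used, so treat the choice as an assumption — a self-contained sketch:

```python
def merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch: dicts merge recursively,
    None deletes a key, any other value replaces the target value."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

# Hypothetical secret, patched with a label the test could later select on:
secret = {"metadata": {"name": "test-secret", "labels": {}},
          "data": {"key": "dmFsdWU="}}
patched = merge_patch(secret, {"metadata": {"labels": {"testsecret": "true"}}})
print(patched["metadata"]["labels"])  # -> {'testsecret': 'true'}
```

The added label is what makes the later "deleting the secret using a LabelSelector" step possible: the selector matches only secrets carrying the patched label.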
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:12:44.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:12:44.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5a3cba3-72a6-4363-ad20-4ad048651825" in namespace "downward-api-6686" to be "Succeeded or Failed"
May 11 21:12:44.991: INFO: Pod "downwardapi-volume-b5a3cba3-72a6-4363-ad20-4ad048651825": Phase="Pending", Reason="", readiness=false. Elapsed: 27.392017ms
May 11 21:12:47.059: INFO: Pod "downwardapi-volume-b5a3cba3-72a6-4363-ad20-4ad048651825": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0945458s
May 11 21:12:49.071: INFO: Pod "downwardapi-volume-b5a3cba3-72a6-4363-ad20-4ad048651825": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107303043s
May 11 21:12:51.075: INFO: Pod "downwardapi-volume-b5a3cba3-72a6-4363-ad20-4ad048651825": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111086034s
STEP: Saw pod success
May 11 21:12:51.075: INFO: Pod "downwardapi-volume-b5a3cba3-72a6-4363-ad20-4ad048651825" satisfied condition "Succeeded or Failed"
May 11 21:12:51.078: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-b5a3cba3-72a6-4363-ad20-4ad048651825 container client-container: 
STEP: delete the pod
May 11 21:12:51.172: INFO: Waiting for pod downwardapi-volume-b5a3cba3-72a6-4363-ad20-4ad048651825 to disappear
May 11 21:12:51.234: INFO: Pod downwardapi-volume-b5a3cba3-72a6-4363-ad20-4ad048651825 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:12:51.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6686" for this suite.

• [SLOW TEST:7.094 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1754,"failed":0}
SSSSSSSS
------------------------------
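The Downward API test above checks that when a container sets no CPU limit, the exposed `limits.cpu` falls back to the node's allocatable CPU. A sketch of the kind of pod manifest such a test submits — the field names follow the downward API volume schema, but this particular manifest is an illustration, not the one the suite actually created:

```python
import json

# Hypothetical pod mirroring the downward-api volume test above: one
# container with no resources.limits.cpu, plus a downward API volume
# exposing limits.cpu (which then defaults to node allocatable CPU).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            # Note: no "resources" key, so limits.cpu is unset and the
            # downward API reports the node allocatable value instead.
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "cpu_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.cpu",
                    },
                }],
            },
        }],
    },
}
print(json.dumps(pod["spec"]["volumes"][0]["downwardAPI"], indent=2))
```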
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:12:51.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:12:53.130: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:12:55.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:12:57.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:12:59.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828373, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:13:02.540: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:13:02.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1418-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:13:03.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8591" for this suite.
STEP: Destroying namespace "webhook-8591-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.480 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":100,"skipped":1762,"failed":0}
S
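The webhook test registers a mutating webhook for the custom resource `e2e-test-webhook-1418-crds.webhook.example.com` via the AdmissionRegistration API. A sketch of that kind of registration object follows; the service path, CA bundle, and API group/version details are placeholders, not values from the log:

```yaml
# Sketch of a MutatingWebhookConfiguration scoped to a custom resource.
# clientConfig points at the in-cluster webhook service the test deploys.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook      # hypothetical name
webhooks:
- name: e2e-test-webhook-1418-crds.webhook.example.com
  clientConfig:
    service:
      namespace: webhook-8591
      name: e2e-test-webhook
      path: /mutating-custom-resource   # hypothetical path
    caBundle: "<base64-encoded CA bundle>"
  rules:
  - apiGroups: ["webhook.example.com"]  # assumed CRD group
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-1418-crds"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```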
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:13:03.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:13:04.816: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4" in namespace "projected-7209" to be "Succeeded or Failed"
May 11 21:13:04.980: INFO: Pod "downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4": Phase="Pending", Reason="", readiness=false. Elapsed: 164.502814ms
May 11 21:13:06.983: INFO: Pod "downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166878307s
May 11 21:13:09.532: INFO: Pod "downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715928893s
May 11 21:13:11.585: INFO: Pod "downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4": Phase="Running", Reason="", readiness=true. Elapsed: 6.769269638s
May 11 21:13:13.587: INFO: Pod "downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.771693706s
STEP: Saw pod success
May 11 21:13:13.587: INFO: Pod "downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4" satisfied condition "Succeeded or Failed"
May 11 21:13:13.589: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4 container client-container: 
STEP: delete the pod
May 11 21:13:13.760: INFO: Waiting for pod downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4 to disappear
May 11 21:13:13.802: INFO: Pod downwardapi-volume-2237bfd9-7b6b-4969-906e-ca6a8b52c1e4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:13:13.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7209" for this suite.

• [SLOW TEST:9.959 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1763,"failed":0}
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:13:13.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:13:14.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3671" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":102,"skipped":1768,"failed":0}
SSSSSSSSSSS
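The QOS test above verifies `status.qosClass` on the created pod. A pod whose requests equal its limits for both cpu and memory is assigned the Guaranteed class by the API server; a minimal sketch (names and image are illustrative):

```yaml
# Requests == limits for cpu and memory => QoS class "Guaranteed",
# surfaced read-only in status.qosClass.
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo            # hypothetical name
spec:
  containers:
  - name: agnhost
    image: busybox:1.29     # illustrative image
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
```

One can confirm the assigned class with `kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'`, which should print `Guaranteed` for this spec.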
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:13:15.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:13:19.812: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:13:21.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828399, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828399, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828400, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828399, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:13:23.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828399, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828399, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828400, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828399, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:13:27.004: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:13:27.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4975" for this suite.
STEP: Destroying namespace "webhook-4975-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.692 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":103,"skipped":1779,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:13:27.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 21:13:37.684: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:13:37.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9951" for this suite.

• [SLOW TEST:9.846 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1828,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
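The termination-message test relies on `terminationMessagePolicy: FallbackToLogsOnError`: when a container fails without writing to its termination message file, the tail of its log output is used instead (the `DONE` the log above matches against). A minimal reproduction might be:

```yaml
# Sketch: the container exits nonzero without writing
# /dev/termination-log, so the kubelet falls back to the log tail
# ("DONE") as the termination message.
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox:1.29     # illustrative image
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```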
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:13:37.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
May 11 21:13:38.000: INFO: Waiting up to 5m0s for pod "var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef" in namespace "var-expansion-908" to be "Succeeded or Failed"
May 11 21:13:38.159: INFO: Pod "var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef": Phase="Pending", Reason="", readiness=false. Elapsed: 158.963225ms
May 11 21:13:40.213: INFO: Pod "var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212363399s
May 11 21:13:42.237: INFO: Pod "var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236367158s
May 11 21:13:44.264: INFO: Pod "var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef": Phase="Running", Reason="", readiness=true. Elapsed: 6.263811204s
May 11 21:13:46.267: INFO: Pod "var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.267115741s
STEP: Saw pod success
May 11 21:13:46.267: INFO: Pod "var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef" satisfied condition "Succeeded or Failed"
May 11 21:13:46.274: INFO: Trying to get logs from node kali-worker2 pod var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef container dapi-container: 
STEP: delete the pod
May 11 21:13:47.008: INFO: Waiting for pod var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef to disappear
May 11 21:13:47.019: INFO: Pod var-expansion-f379fcf5-eeba-43da-bb10-e0bea35d60ef no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:13:47.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-908" for this suite.

• [SLOW TEST:9.204 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1853,"failed":0}
SSSSSSSS
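The variable-expansion test depends on Kubernetes expanding `$(VAR)` references in a container's `command`/`args` from the container's environment before the process starts. A minimal sketch (names, image, and message are illustrative):

```yaml
# Sketch: $(MESSAGE) in the command array is expanded by Kubernetes
# (not by the shell) using the container's env before the container runs.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29      # illustrative image
    env:
    - name: MESSAGE
      value: "test message"
    command: ["/bin/echo", "$(MESSAGE)"]
```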
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:13:47.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2960
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May 11 21:13:47.379: INFO: Found 0 stateful pods, waiting for 3
May 11 21:13:57.728: INFO: Found 2 stateful pods, waiting for 3
May 11 21:14:07.483: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:14:07.483: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:14:07.483: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 11 21:14:17.382: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:14:17.382: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:14:17.382: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 11 21:14:17.447: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 11 21:14:27.972: INFO: Updating stateful set ss2
May 11 21:14:28.089: INFO: Waiting for Pod statefulset-2960/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May 11 21:14:39.033: INFO: Found 2 stateful pods, waiting for 3
May 11 21:14:49.037: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:14:49.038: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:14:49.038: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 11 21:14:49.059: INFO: Updating stateful set ss2
May 11 21:14:49.099: INFO: Waiting for Pod statefulset-2960/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 11 21:14:59.123: INFO: Updating stateful set ss2
May 11 21:14:59.160: INFO: Waiting for StatefulSet statefulset-2960/ss2 to complete update
May 11 21:14:59.160: INFO: Waiting for Pod statefulset-2960/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 11 21:15:09.226: INFO: Waiting for StatefulSet statefulset-2960/ss2 to complete update
May 11 21:15:09.226: INFO: Waiting for Pod statefulset-2960/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 11 21:15:19.164: INFO: Deleting all statefulset in ns statefulset-2960
May 11 21:15:19.166: INFO: Scaling statefulset ss2 to 0
May 11 21:15:49.183: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:15:49.186: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:15:49.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2960" for this suite.

• [SLOW TEST:122.179 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":106,"skipped":1861,"failed":0}
SS
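The canary and phased rolling updates in the StatefulSet test above are driven by the `RollingUpdate` strategy's `partition` field: only pods with an ordinal greater than or equal to the partition are moved to the new revision, so on a 3-replica set a partition of 2 updates only `ss2-2`. A sketch of the relevant spec (only the `ss2` name and httpd images appear in the log; the rest is illustrative):

```yaml
# Sketch: partition: 2 on a 3-replica StatefulSet canaries the update
# to the highest ordinal only; lowering the partition phases it in.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                  # headless service from the test setup
  selector:
    matchLabels: {app: ss2}          # assumed labels
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.39-alpine
```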
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:15:49.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 11 21:15:49.284: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:15:58.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8876" for this suite.

• [SLOW TEST:9.368 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":107,"skipped":1863,"failed":0}
SSSSSSSSSSSSSSSS
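The init-container test checks that on a `restartPolicy: Never` pod, each init container runs once to completion, in declaration order, before any app container starts. A minimal sketch (all names and images are illustrative):

```yaml
# Sketch: init-1 then init-2 must each exit 0, in order, before
# the "main" container is started.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo           # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox:1.29     # illustrative image
    command: ["true"]
  - name: init-2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo done"]
```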
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:15:58.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-2afa9c67-6509-40a7-8bf2-53b9dd8f7a66
STEP: Creating a pod to test consume configMaps
May 11 21:15:59.217: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b39a6b0-ffa1-4c86-82a1-e44091b515a8" in namespace "configmap-2953" to be "Succeeded or Failed"
May 11 21:15:59.418: INFO: Pod "pod-configmaps-4b39a6b0-ffa1-4c86-82a1-e44091b515a8": Phase="Pending", Reason="", readiness=false. Elapsed: 200.790824ms
May 11 21:16:01.741: INFO: Pod "pod-configmaps-4b39a6b0-ffa1-4c86-82a1-e44091b515a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524212773s
May 11 21:16:03.789: INFO: Pod "pod-configmaps-4b39a6b0-ffa1-4c86-82a1-e44091b515a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572231301s
May 11 21:16:05.794: INFO: Pod "pod-configmaps-4b39a6b0-ffa1-4c86-82a1-e44091b515a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.576763404s
STEP: Saw pod success
May 11 21:16:05.794: INFO: Pod "pod-configmaps-4b39a6b0-ffa1-4c86-82a1-e44091b515a8" satisfied condition "Succeeded or Failed"
May 11 21:16:05.797: INFO: Trying to get logs from node kali-worker pod pod-configmaps-4b39a6b0-ffa1-4c86-82a1-e44091b515a8 container configmap-volume-test: 
STEP: delete the pod
May 11 21:16:06.084: INFO: Waiting for pod pod-configmaps-4b39a6b0-ffa1-4c86-82a1-e44091b515a8 to disappear
May 11 21:16:06.247: INFO: Pod pod-configmaps-4b39a6b0-ffa1-4c86-82a1-e44091b515a8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:16:06.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2953" for this suite.

• [SLOW TEST:7.823 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1879,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
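In the ConfigMap test above, "with mappings" means the volume projects a selected key to a chosen path via `items`, and "as non-root" means the consuming container runs with a non-zero UID. A sketch of that shape (ConfigMap name, key, path, and UID are illustrative):

```yaml
# Sketch: map one ConfigMap key to a nested path and read it back
# as a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo             # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root, assumed UID
  containers:
  - name: configmap-volume-test
    image: busybox:1.29            # illustrative image
    command: ["cat", "/etc/cm/path/to/data"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-test-volume-map   # hypothetical ConfigMap
      items:
      - key: data-1                     # hypothetical key
        path: path/to/data
```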
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:16:06.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May 11 21:16:06.620: INFO: Waiting up to 5m0s for pod "pod-0b741449-f486-4877-9e14-8c311984510c" in namespace "emptydir-5963" to be "Succeeded or Failed"
May 11 21:16:06.684: INFO: Pod "pod-0b741449-f486-4877-9e14-8c311984510c": Phase="Pending", Reason="", readiness=false. Elapsed: 63.679536ms
May 11 21:16:08.908: INFO: Pod "pod-0b741449-f486-4877-9e14-8c311984510c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288372943s
May 11 21:16:11.106: INFO: Pod "pod-0b741449-f486-4877-9e14-8c311984510c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.486421819s
May 11 21:16:13.310: INFO: Pod "pod-0b741449-f486-4877-9e14-8c311984510c": Phase="Running", Reason="", readiness=true. Elapsed: 6.689822909s
May 11 21:16:15.314: INFO: Pod "pod-0b741449-f486-4877-9e14-8c311984510c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.694504775s
STEP: Saw pod success
May 11 21:16:15.314: INFO: Pod "pod-0b741449-f486-4877-9e14-8c311984510c" satisfied condition "Succeeded or Failed"
May 11 21:16:15.318: INFO: Trying to get logs from node kali-worker2 pod pod-0b741449-f486-4877-9e14-8c311984510c container test-container: 
STEP: delete the pod
May 11 21:16:15.354: INFO: Waiting for pod pod-0b741449-f486-4877-9e14-8c311984510c to disappear
May 11 21:16:15.423: INFO: Pod pod-0b741449-f486-4877-9e14-8c311984510c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:16:15.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5963" for this suite.

• [SLOW TEST:9.052 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1915,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:16:15.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 11 21:16:15.516: INFO: Waiting up to 5m0s for pod "pod-586e65d0-d067-45b2-b7ea-a17ed7f4a969" in namespace "emptydir-7232" to be "Succeeded or Failed"
May 11 21:16:15.520: INFO: Pod "pod-586e65d0-d067-45b2-b7ea-a17ed7f4a969": Phase="Pending", Reason="", readiness=false. Elapsed: 3.580158ms
May 11 21:16:17.771: INFO: Pod "pod-586e65d0-d067-45b2-b7ea-a17ed7f4a969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254778065s
May 11 21:16:19.775: INFO: Pod "pod-586e65d0-d067-45b2-b7ea-a17ed7f4a969": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.258111489s
STEP: Saw pod success
May 11 21:16:19.775: INFO: Pod "pod-586e65d0-d067-45b2-b7ea-a17ed7f4a969" satisfied condition "Succeeded or Failed"
May 11 21:16:19.777: INFO: Trying to get logs from node kali-worker2 pod pod-586e65d0-d067-45b2-b7ea-a17ed7f4a969 container test-container: 
STEP: delete the pod
May 11 21:16:19.953: INFO: Waiting for pod pod-586e65d0-d067-45b2-b7ea-a17ed7f4a969 to disappear
May 11 21:16:19.959: INFO: Pod pod-586e65d0-d067-45b2-b7ea-a17ed7f4a969 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:16:19.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7232" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1947,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:16:19.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-92d7bad1-8a23-47d2-8792-b28e77396f00
STEP: Creating a pod to test consume secrets
May 11 21:16:20.207: INFO: Waiting up to 5m0s for pod "pod-secrets-8bc04b67-989e-49a7-9bb5-9a80208f045c" in namespace "secrets-3912" to be "Succeeded or Failed"
May 11 21:16:20.211: INFO: Pod "pod-secrets-8bc04b67-989e-49a7-9bb5-9a80208f045c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.593246ms
May 11 21:16:22.386: INFO: Pod "pod-secrets-8bc04b67-989e-49a7-9bb5-9a80208f045c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179151762s
May 11 21:16:24.490: INFO: Pod "pod-secrets-8bc04b67-989e-49a7-9bb5-9a80208f045c": Phase="Running", Reason="", readiness=true. Elapsed: 4.282283847s
May 11 21:16:26.492: INFO: Pod "pod-secrets-8bc04b67-989e-49a7-9bb5-9a80208f045c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.285089591s
STEP: Saw pod success
May 11 21:16:26.492: INFO: Pod "pod-secrets-8bc04b67-989e-49a7-9bb5-9a80208f045c" satisfied condition "Succeeded or Failed"
May 11 21:16:26.494: INFO: Trying to get logs from node kali-worker pod pod-secrets-8bc04b67-989e-49a7-9bb5-9a80208f045c container secret-volume-test: 
STEP: delete the pod
May 11 21:16:26.556: INFO: Waiting for pod pod-secrets-8bc04b67-989e-49a7-9bb5-9a80208f045c to disappear
May 11 21:16:26.564: INFO: Pod pod-secrets-8bc04b67-989e-49a7-9bb5-9a80208f045c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:16:26.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3912" for this suite.
STEP: Destroying namespace "secret-namespace-5233" for this suite.

• [SLOW TEST:6.612 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1951,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:16:26.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 11 21:16:27.848: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 11 21:16:28.414: INFO: Waiting for terminating namespaces to be deleted...
May 11 21:16:28.543: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May 11 21:16:28.550: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 21:16:28.550: INFO: 	Container kube-proxy ready: true, restart count 0
May 11 21:16:28.550: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 21:16:28.550: INFO: 	Container kindnet-cni ready: true, restart count 1
May 11 21:16:28.550: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May 11 21:16:28.555: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 21:16:28.555: INFO: 	Container kindnet-cni ready: true, restart count 0
May 11 21:16:28.555: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 21:16:28.555: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
May 11 21:16:28.929: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker
May 11 21:16:28.929: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2
May 11 21:16:28.929: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2
May 11 21:16:28.929: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
May 11 21:16:28.929: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
May 11 21:16:28.936: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a47721bf-2ae8-4860-9197-598680230c16.160e15e469731d13], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9520/filler-pod-a47721bf-2ae8-4860-9197-598680230c16 to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a47721bf-2ae8-4860-9197-598680230c16.160e15e4cfd11da4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a47721bf-2ae8-4860-9197-598680230c16.160e15e57e394209], Reason = [Created], Message = [Created container filler-pod-a47721bf-2ae8-4860-9197-598680230c16]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a47721bf-2ae8-4860-9197-598680230c16.160e15e59a1cde0f], Reason = [Started], Message = [Started container filler-pod-a47721bf-2ae8-4860-9197-598680230c16]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbc705ac-0ca2-4650-9f5d-63fe647f4983.160e15e4615b71df], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9520/filler-pod-bbc705ac-0ca2-4650-9f5d-63fe647f4983 to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbc705ac-0ca2-4650-9f5d-63fe647f4983.160e15e4bdaf1625], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbc705ac-0ca2-4650-9f5d-63fe647f4983.160e15e568a1d91a], Reason = [Created], Message = [Created container filler-pod-bbc705ac-0ca2-4650-9f5d-63fe647f4983]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbc705ac-0ca2-4650-9f5d-63fe647f4983.160e15e57f5fb219], Reason = [Started], Message = [Started container filler-pod-bbc705ac-0ca2-4650-9f5d-63fe647f4983]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.160e15e5c882c392], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:16:36.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9520" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:9.726 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":112,"skipped":1959,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:16:36.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:16:43.642: INFO: Waiting up to 5m0s for pod "client-envvars-1762133c-5190-4677-8b5c-c366d6b1fd3d" in namespace "pods-2128" to be "Succeeded or Failed"
May 11 21:16:44.406: INFO: Pod "client-envvars-1762133c-5190-4677-8b5c-c366d6b1fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 763.54441ms
May 11 21:16:46.409: INFO: Pod "client-envvars-1762133c-5190-4677-8b5c-c366d6b1fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.766896635s
May 11 21:16:48.468: INFO: Pod "client-envvars-1762133c-5190-4677-8b5c-c366d6b1fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.825771286s
May 11 21:16:50.574: INFO: Pod "client-envvars-1762133c-5190-4677-8b5c-c366d6b1fd3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.931857297s
STEP: Saw pod success
May 11 21:16:50.574: INFO: Pod "client-envvars-1762133c-5190-4677-8b5c-c366d6b1fd3d" satisfied condition "Succeeded or Failed"
May 11 21:16:50.577: INFO: Trying to get logs from node kali-worker pod client-envvars-1762133c-5190-4677-8b5c-c366d6b1fd3d container env3cont: 
STEP: delete the pod
May 11 21:16:51.050: INFO: Waiting for pod client-envvars-1762133c-5190-4677-8b5c-c366d6b1fd3d to disappear
May 11 21:16:51.400: INFO: Pod client-envvars-1762133c-5190-4677-8b5c-c366d6b1fd3d no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:16:51.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2128" for this suite.

• [SLOW TEST:15.250 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1976,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:16:51.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:17:10.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3991" for this suite.

• [SLOW TEST:18.890 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":114,"skipped":1979,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:17:10.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 11 21:17:17.086: INFO: Successfully updated pod "annotationupdatec5c614c5-5607-4062-beec-c1c98cec4490"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:17:21.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2680" for this suite.

• [SLOW TEST:10.927 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":2002,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:17:21.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
May 11 21:17:21.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2786'
May 11 21:17:21.799: INFO: stderr: ""
May 11 21:17:21.799: INFO: stdout: "pod/pause created\n"
May 11 21:17:21.799: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 11 21:17:21.799: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2786" to be "running and ready"
May 11 21:17:21.828: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 29.362714ms
May 11 21:17:23.921: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121931438s
May 11 21:17:26.015: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215720209s
May 11 21:17:28.221: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.42151139s
May 11 21:17:28.221: INFO: Pod "pause" satisfied condition "running and ready"
May 11 21:17:28.221: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
May 11 21:17:28.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2786'
May 11 21:17:28.585: INFO: stderr: ""
May 11 21:17:28.585: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 11 21:17:28.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2786'
May 11 21:17:28.879: INFO: stderr: ""
May 11 21:17:28.879: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    testing-label-value\n"
STEP: removing the label testing-label of a pod
May 11 21:17:28.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2786'
May 11 21:17:29.629: INFO: stderr: ""
May 11 21:17:29.629: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 11 21:17:29.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2786'
May 11 21:17:29.914: INFO: stderr: ""
May 11 21:17:29.914: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
May 11 21:17:29.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2786'
May 11 21:17:30.566: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:17:30.566: INFO: stdout: "pod \"pause\" force deleted\n"
May 11 21:17:30.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2786'
May 11 21:17:30.743: INFO: stderr: "No resources found in kubectl-2786 namespace.\n"
May 11 21:17:30.743: INFO: stdout: ""
May 11 21:17:30.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2786 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 11 21:17:30.828: INFO: stderr: ""
May 11 21:17:30.828: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:17:30.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2786" for this suite.

• [SLOW TEST:9.461 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":116,"skipped":2062,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:17:30.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
May 11 21:17:31.781: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May 11 21:17:31.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8492'
May 11 21:17:33.105: INFO: stderr: ""
May 11 21:17:33.105: INFO: stdout: "service/agnhost-slave created\n"
May 11 21:17:33.105: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May 11 21:17:33.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8492'
May 11 21:17:34.711: INFO: stderr: ""
May 11 21:17:34.711: INFO: stdout: "service/agnhost-master created\n"
May 11 21:17:34.711: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 11 21:17:34.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8492'
May 11 21:17:35.537: INFO: stderr: ""
May 11 21:17:35.537: INFO: stdout: "service/frontend created\n"
May 11 21:17:35.537: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 11 21:17:35.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8492'
May 11 21:17:35.921: INFO: stderr: ""
May 11 21:17:35.921: INFO: stdout: "deployment.apps/frontend created\n"
May 11 21:17:35.922: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 11 21:17:35.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8492'
May 11 21:17:36.261: INFO: stderr: ""
May 11 21:17:36.261: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 11 21:17:36.261: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 11 21:17:36.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8492'
May 11 21:17:36.581: INFO: stderr: ""
May 11 21:17:36.581: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 11 21:17:36.582: INFO: Waiting for all frontend pods to be Running.
May 11 21:17:46.632: INFO: Waiting for frontend to serve content.
May 11 21:17:47.979: INFO: Trying to add a new entry to the guestbook.
May 11 21:17:48.226: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 11 21:17:48.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8492'
May 11 21:17:48.439: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:17:48.439: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 11 21:17:48.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8492'
May 11 21:17:48.674: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:17:48.674: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 11 21:17:48.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8492'
May 11 21:17:48.824: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:17:48.824: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 11 21:17:48.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8492'
May 11 21:17:49.003: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:17:49.003: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 11 21:17:49.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8492'
May 11 21:17:49.663: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:17:49.663: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 11 21:17:49.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8492'
May 11 21:17:50.504: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:17:50.504: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:17:50.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8492" for this suite.

• [SLOW TEST:20.201 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":117,"skipped":2071,"failed":0}
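Each test record above closes with a machine-readable JSON progress line carrying the suite totals. As an illustrative aside (the helper name is ours, not part of the e2e framework), such lines can be parsed to track how far the run has progressed:

```python
import json

def parse_progress(line: str) -> dict:
    """Parse a Ginkgo JSON progress line into a dict and add a
    derived 'remaining' count (total minus completed specs)."""
    rec = json.loads(line)
    rec["remaining"] = rec["total"] - rec["completed"]
    return rec

# The progress line emitted after the guestbook test above:
sample = ('{"msg":"PASSED [sig-cli] Kubectl client Guestbook application '
          'should create and stop a working application  [Conformance]",'
          '"total":275,"completed":117,"skipped":2071,"failed":0}')
rec = parse_progress(sample)
print(rec["completed"], rec["remaining"], rec["failed"])  # 117 158 0
```

Watching the `completed` counter against `total` is a quick way to estimate how much of a long conformance run is left.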
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:17:51.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:17:52.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 11 21:17:55.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7288 create -f -'
May 11 21:17:59.152: INFO: stderr: ""
May 11 21:17:59.152: INFO: stdout: "e2e-test-crd-publish-openapi-7888-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 11 21:17:59.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7288 delete e2e-test-crd-publish-openapi-7888-crds test-cr'
May 11 21:17:59.284: INFO: stderr: ""
May 11 21:17:59.284: INFO: stdout: "e2e-test-crd-publish-openapi-7888-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 11 21:17:59.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7288 apply -f -'
May 11 21:17:59.551: INFO: stderr: ""
May 11 21:17:59.551: INFO: stdout: "e2e-test-crd-publish-openapi-7888-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 11 21:17:59.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7288 delete e2e-test-crd-publish-openapi-7888-crds test-cr'
May 11 21:17:59.733: INFO: stderr: ""
May 11 21:17:59.733: INFO: stdout: "e2e-test-crd-publish-openapi-7888-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 11 21:17:59.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7888-crds'
May 11 21:18:00.078: INFO: stderr: ""
May 11 21:18:00.078: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7888-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:18:03.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7288" for this suite.

• [SLOW TEST:12.097 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":118,"skipped":2100,"failed":0}
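Durations like the `SLOW TEST:12.097 seconds` figure above can be cross-checked from the timestamped INFO lines. A small sketch (the timestamp format string is inferred from the log; the log omits the year, so this only holds for deltas within one year):

```python
from datetime import datetime

# Format matching log stamps such as "May 11 21:17:59.152".
FMT = "%b %d %H:%M:%S.%f"

def elapsed(start: str, end: str) -> float:
    """Seconds between two log timestamps (assumed same year,
    since the log carries no year field)."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()

# Two stamps from the CRD test above: create vs. delete of test-cr.
print(elapsed("May 11 21:17:59.152", "May 11 21:18:00.078"))  # 0.926
```

This is handy when hunting for which individual kubectl invocation dominated a slow test.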
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:18:03.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-7552
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 11 21:18:03.501: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 11 21:18:03.778: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:18:05.870: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:18:08.026: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:18:09.977: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:18:11.782: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:18:13.781: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:18:15.781: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:18:17.782: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:18:19.781: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:18:21.856: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:18:23.782: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:18:25.808: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 11 21:18:25.815: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 11 21:18:27.819: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 11 21:18:29.832: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 11 21:18:33.904: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:8080/dial?request=hostname&protocol=udp&host=10.244.2.93&port=8081&tries=1'] Namespace:pod-network-test-7552 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:18:33.904: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:18:33.939111       7 log.go:172] (0xc0066322c0) (0xc0014a10e0) Create stream
I0511 21:18:33.939145       7 log.go:172] (0xc0066322c0) (0xc0014a10e0) Stream added, broadcasting: 1
I0511 21:18:33.940726       7 log.go:172] (0xc0066322c0) Reply frame received for 1
I0511 21:18:33.940769       7 log.go:172] (0xc0066322c0) (0xc0012679a0) Create stream
I0511 21:18:33.940785       7 log.go:172] (0xc0066322c0) (0xc0012679a0) Stream added, broadcasting: 3
I0511 21:18:33.942168       7 log.go:172] (0xc0066322c0) Reply frame received for 3
I0511 21:18:33.942198       7 log.go:172] (0xc0066322c0) (0xc0014a12c0) Create stream
I0511 21:18:33.942212       7 log.go:172] (0xc0066322c0) (0xc0014a12c0) Stream added, broadcasting: 5
I0511 21:18:33.943091       7 log.go:172] (0xc0066322c0) Reply frame received for 5
I0511 21:18:34.497410       7 log.go:172] (0xc0066322c0) Data frame received for 3
I0511 21:18:34.497453       7 log.go:172] (0xc0012679a0) (3) Data frame handling
I0511 21:18:34.497488       7 log.go:172] (0xc0012679a0) (3) Data frame sent
I0511 21:18:34.498172       7 log.go:172] (0xc0066322c0) Data frame received for 5
I0511 21:18:34.498193       7 log.go:172] (0xc0014a12c0) (5) Data frame handling
I0511 21:18:34.498339       7 log.go:172] (0xc0066322c0) Data frame received for 3
I0511 21:18:34.498363       7 log.go:172] (0xc0012679a0) (3) Data frame handling
I0511 21:18:34.500193       7 log.go:172] (0xc0066322c0) Data frame received for 1
I0511 21:18:34.500218       7 log.go:172] (0xc0014a10e0) (1) Data frame handling
I0511 21:18:34.500232       7 log.go:172] (0xc0014a10e0) (1) Data frame sent
I0511 21:18:34.500259       7 log.go:172] (0xc0066322c0) (0xc0014a10e0) Stream removed, broadcasting: 1
I0511 21:18:34.500289       7 log.go:172] (0xc0066322c0) Go away received
I0511 21:18:34.500463       7 log.go:172] (0xc0066322c0) (0xc0014a10e0) Stream removed, broadcasting: 1
I0511 21:18:34.500486       7 log.go:172] (0xc0066322c0) (0xc0012679a0) Stream removed, broadcasting: 3
I0511 21:18:34.500497       7 log.go:172] (0xc0066322c0) (0xc0014a12c0) Stream removed, broadcasting: 5
May 11 21:18:34.500: INFO: Waiting for responses: map[]
May 11 21:18:34.826: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:8080/dial?request=hostname&protocol=udp&host=10.244.1.133&port=8081&tries=1'] Namespace:pod-network-test-7552 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:18:34.826: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:18:35.238047       7 log.go:172] (0xc0066a4210) (0xc002921680) Create stream
I0511 21:18:35.238073       7 log.go:172] (0xc0066a4210) (0xc002921680) Stream added, broadcasting: 1
I0511 21:18:35.239360       7 log.go:172] (0xc0066a4210) Reply frame received for 1
I0511 21:18:35.239385       7 log.go:172] (0xc0066a4210) (0xc001267c20) Create stream
I0511 21:18:35.239397       7 log.go:172] (0xc0066a4210) (0xc001267c20) Stream added, broadcasting: 3
I0511 21:18:35.239985       7 log.go:172] (0xc0066a4210) Reply frame received for 3
I0511 21:18:35.240009       7 log.go:172] (0xc0066a4210) (0xc0014a15e0) Create stream
I0511 21:18:35.240017       7 log.go:172] (0xc0066a4210) (0xc0014a15e0) Stream added, broadcasting: 5
I0511 21:18:35.240587       7 log.go:172] (0xc0066a4210) Reply frame received for 5
I0511 21:18:35.300525       7 log.go:172] (0xc0066a4210) Data frame received for 3
I0511 21:18:35.300561       7 log.go:172] (0xc001267c20) (3) Data frame handling
I0511 21:18:35.300606       7 log.go:172] (0xc001267c20) (3) Data frame sent
I0511 21:18:35.300858       7 log.go:172] (0xc0066a4210) Data frame received for 3
I0511 21:18:35.300874       7 log.go:172] (0xc001267c20) (3) Data frame handling
I0511 21:18:35.300906       7 log.go:172] (0xc0066a4210) Data frame received for 5
I0511 21:18:35.300921       7 log.go:172] (0xc0014a15e0) (5) Data frame handling
I0511 21:18:35.302389       7 log.go:172] (0xc0066a4210) Data frame received for 1
I0511 21:18:35.302419       7 log.go:172] (0xc002921680) (1) Data frame handling
I0511 21:18:35.302437       7 log.go:172] (0xc002921680) (1) Data frame sent
I0511 21:18:35.302451       7 log.go:172] (0xc0066a4210) (0xc002921680) Stream removed, broadcasting: 1
I0511 21:18:35.302521       7 log.go:172] (0xc0066a4210) (0xc002921680) Stream removed, broadcasting: 1
I0511 21:18:35.302534       7 log.go:172] (0xc0066a4210) (0xc001267c20) Stream removed, broadcasting: 3
I0511 21:18:35.302583       7 log.go:172] (0xc0066a4210) Go away received
I0511 21:18:35.302623       7 log.go:172] (0xc0066a4210) (0xc0014a15e0) Stream removed, broadcasting: 5
May 11 21:18:35.302: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:18:35.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7552" for this suite.

• [SLOW TEST:32.531 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2125,"failed":0}
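The intra-pod UDP check above curls agnhost's `/dial` endpoint on the test container pod. As a sketch of how that probe URL is assembled (the helper function is illustrative, not the framework's own; the IPs and ports are the ones from the log):

```python
from urllib.parse import urlencode

def dial_url(probe_ip: str, target_ip: str,
             protocol: str = "udp", port: int = 8081, tries: int = 1) -> str:
    """Build the /dial probe URL the e2e framework fetches with
    `curl -g -q -s` from inside the test container pod."""
    query = urlencode({
        "request": "hostname",   # ask the netserver for its hostname
        "protocol": protocol,    # udp in this test; tcp in its sibling
        "host": target_ip,       # netserver pod IP to dial
        "port": port,
        "tries": tries,
    })
    return f"http://{probe_ip}:8080/dial?{query}"

print(dial_url("10.244.2.94", "10.244.2.93"))
# http://10.244.2.94:8080/dial?request=hostname&protocol=udp&host=10.244.2.93&port=8081&tries=1
```

The `-g` flag in the logged curl command disables URL globbing so the bracketed query survives; the test passes when every dialed netserver answers with its hostname, leaving `Waiting for responses: map[]` empty as seen above.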
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:18:35.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May 11 21:18:36.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:18:54.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1658" for this suite.

• [SLOW TEST:18.597 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":120,"skipped":2127,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:18:54.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 21:19:04.686: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:19:04.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5726" for this suite.

• [SLOW TEST:10.663 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:19:04.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:19:07.141: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:19:09.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828747, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828747, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828747, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828747, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:19:11.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828747, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828747, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828747, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828747, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:19:14.947: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:19:25.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1552" for this suite.
STEP: Destroying namespace "webhook-1552-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.409 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":122,"skipped":2179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:19:25.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0511 21:19:26.801586       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 21:19:26.801: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:19:26.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6484" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":123,"skipped":2220,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:19:26.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:19:27.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4aadfb90-50f4-4da7-bcfb-e9ebee0c052f" in namespace "downward-api-6558" to be "Succeeded or Failed"
May 11 21:19:27.433: INFO: Pod "downwardapi-volume-4aadfb90-50f4-4da7-bcfb-e9ebee0c052f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.873563ms
May 11 21:19:29.436: INFO: Pod "downwardapi-volume-4aadfb90-50f4-4da7-bcfb-e9ebee0c052f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036232526s
May 11 21:19:31.479: INFO: Pod "downwardapi-volume-4aadfb90-50f4-4da7-bcfb-e9ebee0c052f": Phase="Running", Reason="", readiness=true. Elapsed: 4.079350329s
May 11 21:19:34.271: INFO: Pod "downwardapi-volume-4aadfb90-50f4-4da7-bcfb-e9ebee0c052f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.871929655s
STEP: Saw pod success
May 11 21:19:34.271: INFO: Pod "downwardapi-volume-4aadfb90-50f4-4da7-bcfb-e9ebee0c052f" satisfied condition "Succeeded or Failed"
May 11 21:19:34.275: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-4aadfb90-50f4-4da7-bcfb-e9ebee0c052f container client-container: 
STEP: delete the pod
May 11 21:19:35.681: INFO: Waiting for pod downwardapi-volume-4aadfb90-50f4-4da7-bcfb-e9ebee0c052f to disappear
May 11 21:19:35.736: INFO: Pod downwardapi-volume-4aadfb90-50f4-4da7-bcfb-e9ebee0c052f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:19:35.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6558" for this suite.

• [SLOW TEST:9.280 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2220,"failed":0}
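The test above creates a pod whose downwardAPI volume projects the container's own memory request into a file, then verifies the file's contents. A minimal sketch of that manifest shape, as a plain Python dict — the container name matches the log ("client-container"); the image, mount path, and request size are assumptions, not taken from the framework's actual spec:

```python
def downward_api_memory_request_pod(name):
    """Pod whose downwardAPI volume exposes requests.memory as a file (sketch)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "busybox:1.29",  # assumed image, not from the log
                "command": ["sh", "-c", "cat /etc/podinfo/memory_request"],
                "resources": {"requests": {"memory": "32Mi"}},  # assumed size
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {
                    # resourceFieldRef projects the container's own resource
                    # request into the mounted file at the given path.
                    "items": [{
                        "path": "memory_request",
                        "resourceFieldRef": {
                            "containerName": "client-container",
                            "resource": "requests.memory",
                        },
                    }],
                },
            }],
        },
    }
```

The test then waits for the pod to reach "Succeeded" and checks the container's logs for the expected value, as seen in the polling lines above.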
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:19:36.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May 11 21:19:36.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May 11 21:19:47.708: INFO: >>> kubeConfig: /root/.kube/config
May 11 21:19:50.834: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:20:01.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1448" for this suite.

• [SLOW TEST:24.989 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":125,"skipped":2220,"failed":0}
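The CRD publishing test above registers CRs in the same API group under different versions (both as one multi-version CRD and as two separate CRDs) and checks that each shows up in the apiserver's OpenAPI document. A hedged sketch of the multi-version CRD shape — the group and kind names are hypothetical, not the ones the framework generates:

```python
def multiversion_crd(group="stable.example.com"):
    """A CRD serving two versions of the same kind (sketch, hypothetical names)."""
    schema = {"openAPIV3Schema": {"type": "object", "x-kubernetes-preserve-unknown-fields": True}}
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": "foos." + group},  # must be <plural>.<group>
        "spec": {
            "group": group,
            "scope": "Namespaced",
            "names": {"plural": "foos", "singular": "foo", "kind": "Foo"},
            # Both versions are served; exactly one may be the storage version.
            "versions": [
                {"name": "v1", "served": True, "storage": True, "schema": schema},
                {"name": "v2", "served": True, "storage": False, "schema": schema},
            ],
        },
    }
```

Once registered, both `/apis/stable.example.com/v1` and `/apis/stable.example.com/v2` documents would carry the published schema.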
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:20:01.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:20:01.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8ed312e-8a4a-41e2-a9ee-1ca669d5ee58" in namespace "projected-8609" to be "Succeeded or Failed"
May 11 21:20:01.360: INFO: Pod "downwardapi-volume-a8ed312e-8a4a-41e2-a9ee-1ca669d5ee58": Phase="Pending", Reason="", readiness=false. Elapsed: 138.169563ms
May 11 21:20:03.713: INFO: Pod "downwardapi-volume-a8ed312e-8a4a-41e2-a9ee-1ca669d5ee58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491335417s
May 11 21:20:06.564: INFO: Pod "downwardapi-volume-a8ed312e-8a4a-41e2-a9ee-1ca669d5ee58": Phase="Running", Reason="", readiness=true. Elapsed: 5.34207832s
May 11 21:20:08.568: INFO: Pod "downwardapi-volume-a8ed312e-8a4a-41e2-a9ee-1ca669d5ee58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.346420155s
STEP: Saw pod success
May 11 21:20:08.568: INFO: Pod "downwardapi-volume-a8ed312e-8a4a-41e2-a9ee-1ca669d5ee58" satisfied condition "Succeeded or Failed"
May 11 21:20:08.571: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a8ed312e-8a4a-41e2-a9ee-1ca669d5ee58 container client-container: 
STEP: delete the pod
May 11 21:20:08.885: INFO: Waiting for pod downwardapi-volume-a8ed312e-8a4a-41e2-a9ee-1ca669d5ee58 to disappear
May 11 21:20:08.935: INFO: Pod downwardapi-volume-a8ed312e-8a4a-41e2-a9ee-1ca669d5ee58 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:20:08.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8609" for this suite.

• [SLOW TEST:8.020 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2222,"failed":0}
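The projected-volume variant above differs from the plain downwardAPI volume in that the downwardAPI items sit inside a `projected.sources` list, and here the test reads `limits.memory` rather than the request. A sketch of just the volume stanza, with assumed file path and container name:

```python
def projected_memory_limit_volume(container_name="client-container"):
    """Projected volume wrapping a downwardAPI source for limits.memory (sketch)."""
    return {
        "name": "podinfo",
        "projected": {
            # "sources" can combine downwardAPI, configMap, secret, and
            # serviceAccountToken projections in one volume.
            "sources": [{
                "downwardAPI": {
                    "items": [{
                        "path": "memory_limit",  # assumed path
                        "resourceFieldRef": {
                            "containerName": container_name,
                            "resource": "limits.memory",
                        },
                    }],
                },
            }],
        },
    }
```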
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:20:09.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:20:13.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7708" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2225,"failed":0}
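The kubelet test above runs a busybox pod with a simple shell command and then verifies the command's stdout appears in the container logs. A minimal sketch of such a pod — the echoed string and image tag are assumptions:

```python
def busybox_logs_pod(name):
    """Pod running a one-shot shell command whose output lands in the logs (sketch)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",  # run once, then the log is inspected
            "containers": [{
                "name": "busybox",
                "image": "busybox:1.29",  # assumed tag
                "command": ["/bin/sh", "-c", "echo 'Hello from busybox'"],
            }],
        },
    }
```

After the pod succeeds, the verification step amounts to reading the pod's log (e.g. via the `/log` subresource) and matching the echoed string.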
SSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:20:13.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:20:14.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7807" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":128,"skipped":2233,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:20:14.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:20:15.446: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:20:17.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828815, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828815, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828815, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828815, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:20:19.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828815, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828815, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828815, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828815, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:20:22.661: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 11 21:20:26.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-5830 to-be-attached-pod -i -c=container1'
May 11 21:20:26.831: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:20:26.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5830" for this suite.
STEP: Destroying namespace "webhook-5830-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.422 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":129,"skipped":2246,"failed":0}
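The admission-webhook test above registers a validating webhook that intercepts the `CONNECT` operation on the `pods/attach` subresource, so the `kubectl attach` at 21:20:26 exits with rc 1. A hedged sketch of the registration object — the webhook name and service path are hypothetical; the namespace and service name come from the log:

```python
def deny_attach_webhook_config():
    """ValidatingWebhookConfiguration intercepting pod attach (sketch)."""
    return {
        "apiVersion": "admissionregistration.k8s.io/v1",
        "kind": "ValidatingWebhookConfiguration",
        "metadata": {"name": "deny-attaching-pod.example.com"},  # hypothetical name
        "webhooks": [{
            "name": "deny-attaching-pod.example.com",
            # CONNECT on pods/attach is what `kubectl attach` performs.
            "rules": [{
                "apiGroups": [""],
                "apiVersions": ["v1"],
                "operations": ["CONNECT"],
                "resources": ["pods/attach"],
            }],
            "clientConfig": {"service": {
                "namespace": "webhook-5830",       # from the log
                "name": "e2e-test-webhook",        # from the log
                "path": "/pods/attach",            # assumed handler path
            }},
            "admissionReviewVersions": ["v1"],
            "sideEffects": "None",
            "failurePolicy": "Fail",
        }],
    }
```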
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:20:27.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:20:29.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:20:34.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1109" for this suite.

• [SLOW TEST:6.595 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2274,"failed":0}
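Remote command execution over websockets, as exercised above, works by upgrading a request to the pod's `exec` subresource. A small helper sketching how that API path is assembled — the namespace matches the log; pod name, container, and command are illustrative:

```python
from urllib.parse import urlencode

def exec_websocket_path(namespace, pod, container, command):
    """Build the pods/exec subresource path a client upgrades to a websocket (sketch)."""
    # Each command argument is a separate repeated "command" query parameter.
    params = [("container", container), ("stdout", "true"), ("stderr", "true")]
    params += [("command", c) for c in command]
    return f"/api/v1/namespaces/{namespace}/pods/{pod}/exec?{urlencode(params)}"
```

The client then opens this path against the apiserver with a websocket upgrade (channel-framed streams for stdout/stderr), which is what the test drives instead of SPDY-based `kubectl exec`.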
S
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:20:34.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:20:34.607: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-6fd735c8-7616-4be4-b297-67e6d6d76af8" in namespace "security-context-test-8370" to be "Succeeded or Failed"
May 11 21:20:34.630: INFO: Pod "busybox-readonly-false-6fd735c8-7616-4be4-b297-67e6d6d76af8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.564187ms
May 11 21:20:36.918: INFO: Pod "busybox-readonly-false-6fd735c8-7616-4be4-b297-67e6d6d76af8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311400077s
May 11 21:20:38.924: INFO: Pod "busybox-readonly-false-6fd735c8-7616-4be4-b297-67e6d6d76af8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316878527s
May 11 21:20:40.990: INFO: Pod "busybox-readonly-false-6fd735c8-7616-4be4-b297-67e6d6d76af8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.383210205s
May 11 21:20:40.990: INFO: Pod "busybox-readonly-false-6fd735c8-7616-4be4-b297-67e6d6d76af8" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:20:40.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8370" for this suite.

• [SLOW TEST:6.841 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2275,"failed":0}
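The security-context test above asserts that a container with `readOnlyRootFilesystem: false` can write to its root filesystem. A sketch of the pod — the container name prefix matches the log; the write path and image are assumptions:

```python
def writable_rootfs_pod(name):
    """Pod whose container writes to / to prove the rootfs is writable (sketch)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "busybox-readonly-false",
                "image": "busybox:1.29",  # assumed tag
                # Writing outside a volume mount only succeeds when the
                # root filesystem is not read-only.
                "command": ["sh", "-c", "touch /testfile && echo ok"],
                "securityContext": {"readOnlyRootFilesystem": False},
            }],
        },
    }
```

With `readOnlyRootFilesystem: true` the same `touch` would fail and the pod would not reach "Succeeded".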
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:20:41.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3091, will wait for the garbage collector to delete the pods
May 11 21:20:51.861: INFO: Deleting Job.batch foo took: 39.563125ms
May 11 21:20:52.261: INFO: Terminating Job.batch foo pods took: 400.257339ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:21:33.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3091" for this suite.

• [SLOW TEST:52.801 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":132,"skipped":2291,"failed":0}
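The Job deletion above ("will wait for the garbage collector to delete the pods") relies on cascading deletion: the Job is deleted with a propagation policy and the garbage collector then removes its pods via owner references. A hedged sketch of the request shape — the path follows the batch/v1 REST convention; the DeleteOptions body and default policy here are illustrative, not the framework's exact call:

```python
def delete_job_request(namespace, name, propagation="Foreground"):
    """Sketch of a cascading Job delete: path plus DeleteOptions body."""
    assert propagation in ("Foreground", "Background", "Orphan")
    return {
        "method": "DELETE",
        "path": f"/apis/batch/v1/namespaces/{namespace}/jobs/{name}",
        # propagationPolicy tells the garbage collector how to handle
        # dependents (the Job's pods).
        "body": {"kind": "DeleteOptions", "propagationPolicy": propagation},
    }
```

The subsequent "Ensuring job was deleted" step is a poll until both the Job and its pods are gone, which accounts for most of this test's 52-second runtime.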
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:21:33.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:21:34.645: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:21:37.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828894, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828894, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828894, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828894, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:21:39.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828894, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828894, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828894, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828894, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:21:42.523: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:21:43.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1180" for this suite.
STEP: Destroying namespace "webhook-1180-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.184 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":133,"skipped":2304,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:21:43.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 11 21:21:43.246: INFO: Waiting up to 5m0s for pod "downward-api-f8bc45de-23e8-477b-91f1-9491243a5534" in namespace "downward-api-1172" to be "Succeeded or Failed"
May 11 21:21:43.307: INFO: Pod "downward-api-f8bc45de-23e8-477b-91f1-9491243a5534": Phase="Pending", Reason="", readiness=false. Elapsed: 61.369296ms
May 11 21:21:45.311: INFO: Pod "downward-api-f8bc45de-23e8-477b-91f1-9491243a5534": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064842372s
May 11 21:21:47.314: INFO: Pod "downward-api-f8bc45de-23e8-477b-91f1-9491243a5534": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068026994s
May 11 21:21:49.367: INFO: Pod "downward-api-f8bc45de-23e8-477b-91f1-9491243a5534": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121167492s
STEP: Saw pod success
May 11 21:21:49.367: INFO: Pod "downward-api-f8bc45de-23e8-477b-91f1-9491243a5534" satisfied condition "Succeeded or Failed"
May 11 21:21:49.427: INFO: Trying to get logs from node kali-worker2 pod downward-api-f8bc45de-23e8-477b-91f1-9491243a5534 container dapi-container: 
STEP: delete the pod
May 11 21:21:49.752: INFO: Waiting for pod downward-api-f8bc45de-23e8-477b-91f1-9491243a5534 to disappear
May 11 21:21:49.779: INFO: Pod downward-api-f8bc45de-23e8-477b-91f1-9491243a5534 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:21:49.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1172" for this suite.

• [SLOW TEST:6.671 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2306,"failed":0}
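The downward API env-var test above injects the pod's name, namespace, and IP into the container environment via `fieldRef`. A minimal sketch — the container name matches the log ("dapi-container"); env-var names and image are assumptions:

```python
def downward_api_env_pod(name):
    """Pod exposing its own name, namespace, and IP as env vars (sketch)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",
                "image": "busybox:1.29",  # assumed tag
                "command": ["sh", "-c", "env"],  # dump env so logs can be checked
                "env": [
                    {"name": "POD_NAME",
                     "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}},
                    {"name": "POD_NAMESPACE",
                     "valueFrom": {"fieldRef": {"fieldPath": "metadata.namespace"}}},
                    # status.podIP resolves only once the pod has an IP.
                    {"name": "POD_IP",
                     "valueFrom": {"fieldRef": {"fieldPath": "status.podIP"}}},
                ],
            }],
        },
    }
```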
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:21:49.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 11 21:21:50.357: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3841 /api/v1/namespaces/watch-3841/configmaps/e2e-watch-test-watch-closed eaca9839-1419-4059-974a-95f31dff52f3 3521166 0 2020-05-11 21:21:50 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-11 21:21:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 21:21:50.357: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3841 /api/v1/namespaces/watch-3841/configmaps/e2e-watch-test-watch-closed eaca9839-1419-4059-974a-95f31dff52f3 3521167 0 2020-05-11 21:21:50 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-11 21:21:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 11 21:21:50.371: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3841 /api/v1/namespaces/watch-3841/configmaps/e2e-watch-test-watch-closed eaca9839-1419-4059-974a-95f31dff52f3 3521168 0 2020-05-11 21:21:50 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-11 21:21:50 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}},}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 21:21:50.371: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3841 /api/v1/namespaces/watch-3841/configmaps/e2e-watch-test-watch-closed eaca9839-1419-4059-974a-95f31dff52f3 3521169 0 2020-05-11 21:21:50 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-11 21:21:50 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}},}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
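The event sequence above exercises watch resume semantics: the first watch observes up to resourceVersion 3521167, closes, and a second watch started from that resourceVersion receives exactly the two changes (MODIFIED rv 3521168, DELETED rv 3521169) made while it was closed. A minimal in-memory sketch of that resume behavior (not client-go; the `Event` type and `watchFrom` helper are illustrative only):

```go
package main

import "fmt"

// Event mirrors one watch notification from the log: each carries the
// object's resourceVersion, which a client can use to resume a watch.
type Event struct {
	Type            string
	ResourceVersion int
}

// watchFrom returns the events newer than the given resourceVersion,
// i.e. what a watch restarted at rv would deliver.
func watchFrom(history []Event, rv int) []Event {
	var out []Event
	for _, e := range history {
		if e.ResourceVersion > rv {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	// The four changes from the log, by resourceVersion.
	history := []Event{
		{"ADDED", 3521166},
		{"MODIFIED", 3521167}, // last event seen before the first watch closed
		{"MODIFIED", 3521168}, // made while the watch was closed
		{"DELETED", 3521169},  // made while the watch was closed
	}
	// Resume from the last observed resourceVersion: only the two
	// changes made while the watch was closed are delivered.
	for _, e := range watchFrom(history, 3521167) {
		fmt.Println(e.Type, e.ResourceVersion)
	}
}
```

A real client would pass the resourceVersion in the watch ListOptions; a 410 Gone response means the version has been compacted and the client must relist.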
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:21:50.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3841" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":135,"skipped":2328,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:21:50.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:21:50.577: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fc958a3d-98de-4677-b219-d12a34b51e1b" in namespace "security-context-test-2096" to be "Succeeded or Failed"
May 11 21:21:50.846: INFO: Pod "busybox-user-65534-fc958a3d-98de-4677-b219-d12a34b51e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 268.189508ms
May 11 21:21:52.849: INFO: Pod "busybox-user-65534-fc958a3d-98de-4677-b219-d12a34b51e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271204508s
May 11 21:21:54.911: INFO: Pod "busybox-user-65534-fc958a3d-98de-4677-b219-d12a34b51e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333695806s
May 11 21:21:56.917: INFO: Pod "busybox-user-65534-fc958a3d-98de-4677-b219-d12a34b51e1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.339851197s
May 11 21:21:56.917: INFO: Pod "busybox-user-65534-fc958a3d-98de-4677-b219-d12a34b51e1b" satisfied condition "Succeeded or Failed"
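The pod polled above runs busybox with an effective uid of 65534 and is expected to reach Succeeded. The e2e framework builds the pod in Go, but a manifest of roughly the same shape (image, command, and name here are assumed, not taken from the test) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534      # the test's generated name carries a UUID suffix
spec:
  restartPolicy: Never          # lets the pod terminate and reach Succeeded
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "id -u"]   # assumed; the test checks the container ran as the uid
    securityContext:
      runAsUser: 65534          # the field under test
```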
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:21:56.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2096" for this suite.

• [SLOW TEST:6.546 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2341,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:21:56.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
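This case holds for a full minute (note the 60-second SLOW TEST duration below) to confirm two things: a failing readiness probe keeps the pod out of Ready, and readiness failures, unlike liveness failures, never restart the container. A hypothetical manifest with that shape (image and timings assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-never-ready
spec:
  containers:
  - name: busybox
    image: busybox                   # assumed
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]      # always fails: pod stays Running, never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
    # no livenessProbe: restartCount must stay 0 for the test to pass
```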
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:22:57.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7650" for this suite.

• [SLOW TEST:60.130 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2344,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:22:57.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 11 21:22:57.272: INFO: PodSpec: initContainers in spec.initContainers
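The pod created here has init containers that must each run to completion, in order, before the regular container starts; with restartPolicy Always the pod then keeps running. A manifest of roughly the shape the test constructs (names and images assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
spec:
  restartPolicy: Always
  initContainers:                 # run sequentially to completion first
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause       # assumed long-running app container
```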
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:23:11.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3221" for this suite.

• [SLOW TEST:14.431 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":138,"skipped":2384,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:23:11.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-fdc8l in namespace proxy-4732
I0511 21:23:13.141023       7 runners.go:190] Created replication controller with name: proxy-service-fdc8l, namespace: proxy-4732, replica count: 1
I0511 21:23:14.191538       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:23:15.193322       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:23:16.193563       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:23:17.193789       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:23:18.194030       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:23:19.194297       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:23:20.194502       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0511 21:23:21.194682       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0511 21:23:22.194850       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0511 21:23:23.195081       7 runners.go:190] proxy-service-fdc8l Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 11 21:23:24.165: INFO: setup took 11.587700873s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
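Every attempt below is a GET through the apiserver proxy subresource, and the paths follow a fixed shape: pods and services are addressed as `[scheme:]name[:port]` under `/proxy/`, where an `http:` or `https:` prefix tells the apiserver how to dial the backend. A small sketch of that path construction (the helper names are illustrative, not part of the test):

```go
package main

import "fmt"

// podProxyURL builds the apiserver proxy path for a pod. scheme is "",
// "http:" or "https:"; port may be empty, a number, or a named port.
func podProxyURL(ns, scheme, pod, port string) string {
	target := scheme + pod
	if port != "" {
		target += ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/proxy/", ns, target)
}

// serviceProxyURL builds the apiserver proxy path for a service port.
func serviceProxyURL(ns, scheme, svc, port string) string {
	return fmt.Sprintf("/api/v1/namespaces/%s/services/%s%s:%s/proxy/", ns, scheme, svc, port)
}

func main() {
	// These reproduce two of the paths exercised in this test.
	fmt.Println(podProxyURL("proxy-4732", "", "proxy-service-fdc8l-fd868", "162"))
	fmt.Println(serviceProxyURL("proxy-4732", "https:", "proxy-service-fdc8l", "tlsportname1"))
}
```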
May 11 21:23:24.195: INFO: (0) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 28.967098ms)
May 11 21:23:24.195: INFO: (0) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 29.071893ms)
May 11 21:23:24.195: INFO: (0) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 29.053229ms)
May 11 21:23:24.195: INFO: (0) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 29.277693ms)
May 11 21:23:24.199: INFO: (0) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 33.355223ms)
May 11 21:23:24.199: INFO: (0) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 33.574398ms)
May 11 21:23:24.199: INFO: (0) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 33.44589ms)
May 11 21:23:24.199: INFO: (0) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 33.427437ms)
May 11 21:23:24.199: INFO: (0) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 33.55921ms)
May 11 21:23:24.205: INFO: (0) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 39.341101ms)
May 11 21:23:24.205: INFO: (0) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 39.293409ms)
May 11 21:23:24.212: INFO: (0) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test (200; 106.88586ms)
May 11 21:23:24.320: INFO: (1) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 106.958004ms)
May 11 21:23:24.320: INFO: (1) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 106.99234ms)
May 11 21:23:24.320: INFO: (1) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 107.157256ms)
May 11 21:23:24.320: INFO: (1) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 106.944095ms)
May 11 21:23:24.320: INFO: (1) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: ... (200; 107.244402ms)
May 11 21:23:24.320: INFO: (1) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 107.220714ms)
May 11 21:23:24.961: INFO: (1) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 748.224767ms)
May 11 21:23:25.290: INFO: (1) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 1.077046181s)
May 11 21:23:25.290: INFO: (1) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 1.077070377s)
May 11 21:23:25.290: INFO: (1) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 1.077196922s)
May 11 21:23:25.290: INFO: (1) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 1.0773779s)
May 11 21:23:25.301: INFO: (2) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 10.635477ms)
May 11 21:23:25.301: INFO: (2) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 10.596958ms)
May 11 21:23:25.301: INFO: (2) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 10.728072ms)
May 11 21:23:25.301: INFO: (2) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 10.980646ms)
May 11 21:23:25.301: INFO: (2) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 11.124362ms)
May 11 21:23:25.301: INFO: (2) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 10.847094ms)
May 11 21:23:25.301: INFO: (2) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test<... (200; 11.491272ms)
May 11 21:23:25.302: INFO: (2) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 11.196157ms)
May 11 21:23:25.303: INFO: (2) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 12.23133ms)
May 11 21:23:25.303: INFO: (2) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 12.224836ms)
May 11 21:23:25.303: INFO: (2) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 12.161139ms)
May 11 21:23:25.303: INFO: (2) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 12.502675ms)
May 11 21:23:25.303: INFO: (2) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 12.349795ms)
May 11 21:23:25.303: INFO: (2) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 11.981348ms)
May 11 21:23:25.307: INFO: (3) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 4.21954ms)
May 11 21:23:25.308: INFO: (3) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 5.408984ms)
May 11 21:23:25.308: INFO: (3) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test<... (200; 5.379839ms)
May 11 21:23:25.308: INFO: (3) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 5.358347ms)
May 11 21:23:25.308: INFO: (3) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 5.426643ms)
May 11 21:23:25.308: INFO: (3) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 5.432316ms)
May 11 21:23:25.308: INFO: (3) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 5.371856ms)
May 11 21:23:25.310: INFO: (3) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 6.668073ms)
May 11 21:23:25.310: INFO: (3) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 6.654314ms)
May 11 21:23:25.310: INFO: (3) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 6.619887ms)
May 11 21:23:25.310: INFO: (3) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 6.601181ms)
May 11 21:23:25.310: INFO: (3) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 6.813023ms)
May 11 21:23:25.310: INFO: (3) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 6.79594ms)
May 11 21:23:25.312: INFO: (4) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 2.179175ms)
May 11 21:23:25.314: INFO: (4) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 3.906155ms)
May 11 21:23:25.314: INFO: (4) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 3.992679ms)
May 11 21:23:25.314: INFO: (4) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 4.373471ms)
May 11 21:23:25.314: INFO: (4) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: ... (200; 6.542992ms)
May 11 21:23:25.317: INFO: (4) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 7.168042ms)
May 11 21:23:25.320: INFO: (5) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 3.048204ms)
May 11 21:23:25.321: INFO: (5) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 3.69354ms)
May 11 21:23:25.322: INFO: (5) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 5.082958ms)
May 11 21:23:25.323: INFO: (5) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 5.46551ms)
May 11 21:23:25.323: INFO: (5) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 5.522429ms)
May 11 21:23:25.323: INFO: (5) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 5.537192ms)
May 11 21:23:25.323: INFO: (5) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 5.566627ms)
May 11 21:23:25.323: INFO: (5) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 5.738531ms)
May 11 21:23:25.323: INFO: (5) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 5.849267ms)
May 11 21:23:25.323: INFO: (5) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test<... (200; 7.425468ms)
May 11 21:23:25.330: INFO: (6) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 5.448158ms)
May 11 21:23:25.330: INFO: (6) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 5.535694ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 5.873063ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 6.052218ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 6.207678ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 6.304442ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 6.312592ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 6.332734ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test (200; 6.433748ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 6.477306ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 6.467146ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 6.492257ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 6.545775ms)
May 11 21:23:25.331: INFO: (6) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 6.614079ms)
May 11 21:23:25.332: INFO: (6) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 7.018186ms)
May 11 21:23:25.336: INFO: (7) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 3.68552ms)
May 11 21:23:25.337: INFO: (7) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 4.585068ms)
May 11 21:23:25.337: INFO: (7) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 5.016047ms)
May 11 21:23:25.338: INFO: (7) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 5.601891ms)
May 11 21:23:25.338: INFO: (7) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 5.612948ms)
May 11 21:23:25.338: INFO: (7) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 5.612927ms)
May 11 21:23:25.338: INFO: (7) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: ... (200; 4.074566ms)
May 11 21:23:25.348: INFO: (8) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 8.264122ms)
May 11 21:23:25.348: INFO: (8) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test<... (200; 8.401768ms)
May 11 21:23:25.348: INFO: (8) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 8.37847ms)
May 11 21:23:25.348: INFO: (8) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 8.391113ms)
May 11 21:23:25.349: INFO: (8) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 8.943797ms)
May 11 21:23:25.349: INFO: (8) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 9.212587ms)
May 11 21:23:25.349: INFO: (8) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 9.254585ms)
May 11 21:23:25.350: INFO: (8) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 10.372022ms)
May 11 21:23:25.350: INFO: (8) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 10.422397ms)
May 11 21:23:25.350: INFO: (8) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 10.620262ms)
May 11 21:23:25.351: INFO: (8) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 10.817915ms)
May 11 21:23:25.351: INFO: (8) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 11.016555ms)
May 11 21:23:25.357: INFO: (9) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 6.02955ms)
May 11 21:23:25.357: INFO: (9) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 6.31864ms)
May 11 21:23:25.357: INFO: (9) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 6.224471ms)
May 11 21:23:25.357: INFO: (9) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 6.443815ms)
May 11 21:23:25.358: INFO: (9) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 7.184268ms)
May 11 21:23:25.358: INFO: (9) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 7.224567ms)
May 11 21:23:25.358: INFO: (9) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 7.216897ms)
May 11 21:23:25.358: INFO: (9) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 7.246376ms)
May 11 21:23:25.358: INFO: (9) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test<... (200; 7.366218ms)
May 11 21:23:25.358: INFO: (9) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 7.454938ms)
May 11 21:23:25.359: INFO: (9) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 7.78027ms)
May 11 21:23:25.359: INFO: (9) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 7.854238ms)
May 11 21:23:25.359: INFO: (9) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 7.878507ms)
May 11 21:23:25.359: INFO: (9) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 7.92169ms)
May 11 21:23:25.364: INFO: (10) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 4.950721ms)
May 11 21:23:25.364: INFO: (10) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 5.03776ms)
May 11 21:23:25.365: INFO: (10) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 4.995848ms)
May 11 21:23:25.365: INFO: (10) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 4.935397ms)
May 11 21:23:25.365: INFO: (10) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 5.600158ms)
May 11 21:23:25.365: INFO: (10) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 5.623067ms)
May 11 21:23:25.365: INFO: (10) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test<... (200; 6.180383ms)
May 11 21:23:25.367: INFO: (10) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 7.201155ms)
May 11 21:23:25.367: INFO: (10) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 7.376613ms)
May 11 21:23:25.367: INFO: (10) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 7.274156ms)
May 11 21:23:25.367: INFO: (10) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 7.199719ms)
May 11 21:23:25.367: INFO: (10) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 7.313794ms)
May 11 21:23:25.367: INFO: (10) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 8.056943ms)
May 11 21:23:25.370: INFO: (11) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 2.58389ms)
May 11 21:23:25.370: INFO: (11) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test (200; 4.234039ms)
May 11 21:23:25.371: INFO: (11) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 4.219812ms)
May 11 21:23:25.371: INFO: (11) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 4.218191ms)
May 11 21:23:25.372: INFO: (11) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 4.663641ms)
May 11 21:23:25.372: INFO: (11) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 4.813878ms)
May 11 21:23:25.372: INFO: (11) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 5.034906ms)
May 11 21:23:25.372: INFO: (11) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 5.097938ms)
May 11 21:23:25.372: INFO: (11) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 5.060125ms)
May 11 21:23:25.373: INFO: (11) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 5.862817ms)
May 11 21:23:25.376: INFO: (12) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 3.01986ms)
May 11 21:23:25.377: INFO: (12) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 3.57551ms)
May 11 21:23:25.377: INFO: (12) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 3.662895ms)
May 11 21:23:25.377: INFO: (12) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 3.692369ms)
May 11 21:23:25.377: INFO: (12) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 3.611887ms)
May 11 21:23:25.377: INFO: (12) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 3.660229ms)
May 11 21:23:25.377: INFO: (12) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 3.644592ms)
May 11 21:23:25.377: INFO: (12) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 3.681832ms)
May 11 21:23:25.377: INFO: (12) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 3.715766ms)
May 11 21:23:25.377: INFO: (12) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: ... (200; 3.023998ms)
May 11 21:23:25.381: INFO: (13) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 3.218093ms)
May 11 21:23:25.381: INFO: (13) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 3.1124ms)
May 11 21:23:25.381: INFO: (13) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test (200; 3.276884ms)
May 11 21:23:25.381: INFO: (13) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 3.282715ms)
May 11 21:23:25.381: INFO: (13) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 3.381794ms)
May 11 21:23:25.381: INFO: (13) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 3.361013ms)
May 11 21:23:25.382: INFO: (13) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 4.354304ms)
May 11 21:23:25.382: INFO: (13) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 4.440078ms)
May 11 21:23:25.382: INFO: (13) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 4.622758ms)
May 11 21:23:25.383: INFO: (13) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 4.619801ms)
May 11 21:23:25.383: INFO: (13) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 4.709118ms)
May 11 21:23:25.383: INFO: (13) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 4.719141ms)
May 11 21:23:25.383: INFO: (13) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 4.92466ms)
May 11 21:23:25.383: INFO: (13) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 5.191812ms)
May 11 21:23:25.389: INFO: (14) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 5.324587ms)
May 11 21:23:25.389: INFO: (14) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 5.343436ms)
May 11 21:23:25.389: INFO: (14) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 5.664714ms)
May 11 21:23:25.389: INFO: (14) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 5.820901ms)
May 11 21:23:25.389: INFO: (14) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 5.875982ms)
May 11 21:23:25.389: INFO: (14) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 6.023736ms)
May 11 21:23:25.389: INFO: (14) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 5.915619ms)
May 11 21:23:25.390: INFO: (14) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 6.275641ms)
May 11 21:23:25.390: INFO: (14) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 6.505614ms)
May 11 21:23:25.390: INFO: (14) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test<... (200; 6.575378ms)
May 11 21:23:25.390: INFO: (14) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 6.612347ms)
May 11 21:23:25.427: INFO: (14) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 43.82944ms)
May 11 21:23:25.427: INFO: (14) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 43.809652ms)
May 11 21:23:25.427: INFO: (14) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 43.842364ms)
May 11 21:23:25.427: INFO: (14) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 43.877966ms)
May 11 21:23:25.430: INFO: (15) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 2.854181ms)
May 11 21:23:25.430: INFO: (15) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 3.204249ms)
May 11 21:23:25.432: INFO: (15) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 4.952406ms)
May 11 21:23:25.433: INFO: (15) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 5.328802ms)
May 11 21:23:25.433: INFO: (15) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 5.797986ms)
May 11 21:23:25.433: INFO: (15) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 5.847102ms)
May 11 21:23:25.433: INFO: (15) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 5.871225ms)
May 11 21:23:25.433: INFO: (15) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 5.870526ms)
May 11 21:23:25.433: INFO: (15) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test (200; 5.323363ms)
May 11 21:23:25.441: INFO: (16) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 5.303056ms)
May 11 21:23:25.441: INFO: (16) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 5.412516ms)
May 11 21:23:25.441: INFO: (16) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 5.428801ms)
May 11 21:23:25.441: INFO: (16) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 5.387852ms)
May 11 21:23:25.441: INFO: (16) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 5.374265ms)
May 11 21:23:25.441: INFO: (16) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 5.388908ms)
May 11 21:23:25.443: INFO: (16) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 7.266809ms)
May 11 21:23:25.443: INFO: (16) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 7.265164ms)
May 11 21:23:25.443: INFO: (16) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 7.242533ms)
May 11 21:23:25.443: INFO: (16) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 7.309631ms)
May 11 21:23:25.443: INFO: (16) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 7.415321ms)
May 11 21:23:25.448: INFO: (17) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 4.617627ms)
May 11 21:23:25.448: INFO: (17) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:162/proxy/: bar (200; 4.658136ms)
May 11 21:23:25.448: INFO: (17) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:1080/proxy/: test<... (200; 5.16191ms)
May 11 21:23:25.448: INFO: (17) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 5.181471ms)
May 11 21:23:25.448: INFO: (17) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 5.236455ms)
May 11 21:23:25.448: INFO: (17) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 5.211501ms)
May 11 21:23:25.448: INFO: (17) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test<... (200; 3.882305ms)
May 11 21:23:25.455: INFO: (18) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 3.940691ms)
May 11 21:23:25.456: INFO: (18) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 5.05158ms)
May 11 21:23:25.456: INFO: (18) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test (200; 5.1172ms)
May 11 21:23:25.456: INFO: (18) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 5.240032ms)
May 11 21:23:25.458: INFO: (18) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 6.307546ms)
May 11 21:23:25.458: INFO: (18) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 6.32929ms)
May 11 21:23:25.458: INFO: (18) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname2/proxy/: tls qux (200; 6.329617ms)
May 11 21:23:25.458: INFO: (18) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 6.319207ms)
May 11 21:23:25.458: INFO: (18) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 6.573062ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868:160/proxy/: foo (200; 5.750322ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/services/https:proxy-service-fdc8l:tlsportname1/proxy/: tls baz (200; 5.836221ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname1/proxy/: foo (200; 5.811254ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/services/http:proxy-service-fdc8l:portname2/proxy/: bar (200; 5.797102ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:462/proxy/: tls qux (200; 5.786154ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/pods/proxy-service-fdc8l-fd868/proxy/: test (200; 5.893086ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:162/proxy/: bar (200; 5.946271ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname1/proxy/: foo (200; 5.941865ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:1080/proxy/: ... (200; 5.972414ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:460/proxy/: tls baz (200; 6.031684ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/pods/http:proxy-service-fdc8l-fd868:160/proxy/: foo (200; 6.039409ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/services/proxy-service-fdc8l:portname2/proxy/: bar (200; 5.967249ms)
May 11 21:23:25.464: INFO: (19) /api/v1/namespaces/proxy-4732/pods/https:proxy-service-fdc8l-fd868:443/proxy/: test<... (200; 7.174153ms)
STEP: deleting ReplicationController proxy-service-fdc8l in namespace proxy-4732, will wait for the garbage collector to delete the pods
May 11 21:23:25.521: INFO: Deleting ReplicationController proxy-service-fdc8l took: 4.081789ms
May 11 21:23:25.821: INFO: Terminating ReplicationController proxy-service-fdc8l pods took: 300.192233ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:23:33.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4732" for this suite.

• [SLOW TEST:22.041 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":139,"skipped":2406,"failed":0}
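For reference, every request logged above follows the apiserver proxy-subresource path scheme `/api/v1/namespaces/<ns>/<resource>/[<scheme>:]<name>[:<port>]/proxy/`. A minimal sketch of that path construction (the helper name and signature are illustrative, not part of the test framework):

```python
def proxy_path(namespace, name, port=None, scheme=None, resource="pods"):
    """Build an apiserver proxy-subresource path like the ones in the log.

    scheme ("http"/"https") and port (number or named service port) are
    optional; omitting both yields the plain "<name>/proxy/" variant.
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{resource}/{target}/proxy/"
```

For example, `proxy_path("proxy-4732", "proxy-service-fdc8l-fd868", port=162, scheme="http")` reproduces one of the pod URLs above, and `resource="services"` with a named port like `portname1` reproduces the service URLs.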
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:23:33.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 11 21:23:48.959: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 21:23:48.968: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 21:23:50.968: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 21:23:51.446: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 21:23:52.968: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 21:23:53.015: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 21:23:54.968: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 21:23:54.972: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 21:23:56.968: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 21:23:56.971: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 21:23:58.968: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 21:23:58.971: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 21:24:00.968: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 21:24:01.164: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 21:24:02.968: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 21:24:02.971: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 21:24:04.968: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 21:24:04.972: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:24:04.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3327" for this suite.

• [SLOW TEST:31.449 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2407,"failed":0}
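The pod created above carries a postStart exec hook that the kubelet runs immediately after the container starts. A sketch of the manifest shape (as a Python dict): only the pod name comes from the log; the container name, image, and hook command are assumptions for illustration.

```python
# Illustrative pod with a postStart exec lifecycle hook.
pod_with_poststart = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-exec-hook"},
    "spec": {
        "containers": [{
            "name": "poststart-container",        # assumed name
            "image": "busybox",                   # assumed image
            "command": ["sh", "-c", "sleep 600"],
            "lifecycle": {
                "postStart": {
                    # Runs inside the container right after it starts;
                    # the test then checks the hook fired and deletes the pod.
                    "exec": {"command": ["sh", "-c", "echo hook-ran"]},
                },
            },
        }],
    },
}
```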
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:24:04.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3638.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3638.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 21:24:17.226: INFO: DNS probes using dns-3638/dns-test-2f599534-8dec-4b8a-a213-ff041fbaa11b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:24:18.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3638" for this suite.

• [SLOW TEST:13.281 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":141,"skipped":2475,"failed":0}
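The `awk` expression in the probe scripts above derives a pod A-record name from the pod IP: dots become dashes, suffixed with `<namespace>.pod.cluster.local`. That transformation can be sketched as:

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Reproduce the probe script's awk: dots in the pod IP become
    dashes, then the <ns>.pod.cluster.local suffix is appended."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"
```

So a pod at 10.244.1.147 in namespace dns-3638 is resolvable as `10-244-1-147.dns-3638.pod.cluster.local`, which is what the wheezy/jessie probers query over both UDP and TCP.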
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:24:18.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8388
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 11 21:24:18.611: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 11 21:24:18.983: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:24:21.104: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:24:22.987: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:24:25.178: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:24:26.985: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:24:28.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:24:30.987: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:24:32.986: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:24:35.255: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:24:37.171: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:24:38.986: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:24:40.986: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:24:43.200: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:24:45.044: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 11 21:24:45.048: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 11 21:24:51.197: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.104:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8388 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:24:51.197: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:24:51.229398       7 log.go:172] (0xc001424370) (0xc002423ae0) Create stream
I0511 21:24:51.229423       7 log.go:172] (0xc001424370) (0xc002423ae0) Stream added, broadcasting: 1
I0511 21:24:51.231699       7 log.go:172] (0xc001424370) Reply frame received for 1
I0511 21:24:51.231748       7 log.go:172] (0xc001424370) (0xc002c03860) Create stream
I0511 21:24:51.231761       7 log.go:172] (0xc001424370) (0xc002c03860) Stream added, broadcasting: 3
I0511 21:24:51.232610       7 log.go:172] (0xc001424370) Reply frame received for 3
I0511 21:24:51.232635       7 log.go:172] (0xc001424370) (0xc002d345a0) Create stream
I0511 21:24:51.232647       7 log.go:172] (0xc001424370) (0xc002d345a0) Stream added, broadcasting: 5
I0511 21:24:51.233537       7 log.go:172] (0xc001424370) Reply frame received for 5
I0511 21:24:51.308029       7 log.go:172] (0xc001424370) Data frame received for 3
I0511 21:24:51.308053       7 log.go:172] (0xc002c03860) (3) Data frame handling
I0511 21:24:51.308071       7 log.go:172] (0xc002c03860) (3) Data frame sent
I0511 21:24:51.308083       7 log.go:172] (0xc001424370) Data frame received for 3
I0511 21:24:51.308097       7 log.go:172] (0xc002c03860) (3) Data frame handling
I0511 21:24:51.308130       7 log.go:172] (0xc001424370) Data frame received for 5
I0511 21:24:51.308145       7 log.go:172] (0xc002d345a0) (5) Data frame handling
I0511 21:24:51.309496       7 log.go:172] (0xc001424370) Data frame received for 1
I0511 21:24:51.309511       7 log.go:172] (0xc002423ae0) (1) Data frame handling
I0511 21:24:51.309522       7 log.go:172] (0xc002423ae0) (1) Data frame sent
I0511 21:24:51.309537       7 log.go:172] (0xc001424370) (0xc002423ae0) Stream removed, broadcasting: 1
I0511 21:24:51.309616       7 log.go:172] (0xc001424370) (0xc002423ae0) Stream removed, broadcasting: 1
I0511 21:24:51.309626       7 log.go:172] (0xc001424370) (0xc002c03860) Stream removed, broadcasting: 3
I0511 21:24:51.309726       7 log.go:172] (0xc001424370) (0xc002d345a0) Stream removed, broadcasting: 5
May 11 21:24:51.309: INFO: Found all expected endpoints: [netserver-0]
I0511 21:24:51.309978       7 log.go:172] (0xc001424370) Go away received
May 11 21:24:51.319: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.147:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8388 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:24:51.320: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:24:51.342594       7 log.go:172] (0xc0014249a0) (0xc002423e00) Create stream
I0511 21:24:51.342629       7 log.go:172] (0xc0014249a0) (0xc002423e00) Stream added, broadcasting: 1
I0511 21:24:51.344861       7 log.go:172] (0xc0014249a0) Reply frame received for 1
I0511 21:24:51.344891       7 log.go:172] (0xc0014249a0) (0xc001b36500) Create stream
I0511 21:24:51.344899       7 log.go:172] (0xc0014249a0) (0xc001b36500) Stream added, broadcasting: 3
I0511 21:24:51.345714       7 log.go:172] (0xc0014249a0) Reply frame received for 3
I0511 21:24:51.345740       7 log.go:172] (0xc0014249a0) (0xc002d34780) Create stream
I0511 21:24:51.345749       7 log.go:172] (0xc0014249a0) (0xc002d34780) Stream added, broadcasting: 5
I0511 21:24:51.346376       7 log.go:172] (0xc0014249a0) Reply frame received for 5
I0511 21:24:51.396585       7 log.go:172] (0xc0014249a0) Data frame received for 3
I0511 21:24:51.396602       7 log.go:172] (0xc001b36500) (3) Data frame handling
I0511 21:24:51.396617       7 log.go:172] (0xc001b36500) (3) Data frame sent
I0511 21:24:51.396625       7 log.go:172] (0xc0014249a0) Data frame received for 3
I0511 21:24:51.396635       7 log.go:172] (0xc001b36500) (3) Data frame handling
I0511 21:24:51.397570       7 log.go:172] (0xc0014249a0) Data frame received for 5
I0511 21:24:51.397590       7 log.go:172] (0xc002d34780) (5) Data frame handling
I0511 21:24:51.398281       7 log.go:172] (0xc0014249a0) Data frame received for 1
I0511 21:24:51.398292       7 log.go:172] (0xc002423e00) (1) Data frame handling
I0511 21:24:51.398308       7 log.go:172] (0xc002423e00) (1) Data frame sent
I0511 21:24:51.398328       7 log.go:172] (0xc0014249a0) (0xc002423e00) Stream removed, broadcasting: 1
I0511 21:24:51.398399       7 log.go:172] (0xc0014249a0) (0xc002423e00) Stream removed, broadcasting: 1
I0511 21:24:51.398414       7 log.go:172] (0xc0014249a0) (0xc001b36500) Stream removed, broadcasting: 3
I0511 21:24:51.398493       7 log.go:172] (0xc0014249a0) (0xc002d34780) Stream removed, broadcasting: 5
May 11 21:24:51.398: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:24:51.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0511 21:24:51.398950       7 log.go:172] (0xc0014249a0) Go away received
STEP: Destroying namespace "pod-network-test-8388" for this suite.

• [SLOW TEST:33.144 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2488,"failed":0}
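The two `ExecWithOptions` calls above run the same probe command against each netserver pod IP, differing only in the target address. A sketch of how that command string is assembled (the helper is illustrative; the flags and endpoint match the log):

```python
def host_name_probe(ip: str, port: int = 8080, timeout: int = 15) -> str:
    """Rebuild the shell command the test execs in the host-test
    container: curl the netserver's /hostName endpoint, dropping
    blank lines from the response."""
    return (f"curl -g -q -s --max-time {timeout} --connect-timeout 1 "
            f"http://{ip}:{port}/hostName | grep -v '^\\s*$'")
```

The test passes when the hostnames returned by these probes cover all expected endpoints (`netserver-0`, `netserver-1`).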
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:24:51.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
May 11 21:25:02.158: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3828 PodName:pod-sharedvolume-8603e4cd-178d-4491-8a16-9839d2ed734e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:25:02.158: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:25:02.209260       7 log.go:172] (0xc002127290) (0xc001266fa0) Create stream
I0511 21:25:02.209283       7 log.go:172] (0xc002127290) (0xc001266fa0) Stream added, broadcasting: 1
I0511 21:25:02.210267       7 log.go:172] (0xc002127290) Reply frame received for 1
I0511 21:25:02.210285       7 log.go:172] (0xc002127290) (0xc002d352c0) Create stream
I0511 21:25:02.210291       7 log.go:172] (0xc002127290) (0xc002d352c0) Stream added, broadcasting: 3
I0511 21:25:02.210809       7 log.go:172] (0xc002127290) Reply frame received for 3
I0511 21:25:02.210834       7 log.go:172] (0xc002127290) (0xc001b5c280) Create stream
I0511 21:25:02.210846       7 log.go:172] (0xc002127290) (0xc001b5c280) Stream added, broadcasting: 5
I0511 21:25:02.211436       7 log.go:172] (0xc002127290) Reply frame received for 5
I0511 21:25:02.260836       7 log.go:172] (0xc002127290) Data frame received for 3
I0511 21:25:02.260855       7 log.go:172] (0xc002d352c0) (3) Data frame handling
I0511 21:25:02.260869       7 log.go:172] (0xc002d352c0) (3) Data frame sent
I0511 21:25:02.260874       7 log.go:172] (0xc002127290) Data frame received for 3
I0511 21:25:02.260880       7 log.go:172] (0xc002d352c0) (3) Data frame handling
I0511 21:25:02.260956       7 log.go:172] (0xc002127290) Data frame received for 5
I0511 21:25:02.260974       7 log.go:172] (0xc001b5c280) (5) Data frame handling
I0511 21:25:02.262385       7 log.go:172] (0xc002127290) Data frame received for 1
I0511 21:25:02.262552       7 log.go:172] (0xc001266fa0) (1) Data frame handling
I0511 21:25:02.262569       7 log.go:172] (0xc001266fa0) (1) Data frame sent
I0511 21:25:02.262602       7 log.go:172] (0xc002127290) (0xc001266fa0) Stream removed, broadcasting: 1
I0511 21:25:02.262619       7 log.go:172] (0xc002127290) Go away received
I0511 21:25:02.262890       7 log.go:172] (0xc002127290) (0xc001266fa0) Stream removed, broadcasting: 1
I0511 21:25:02.262904       7 log.go:172] (0xc002127290) (0xc002d352c0) Stream removed, broadcasting: 3
I0511 21:25:02.262916       7 log.go:172] (0xc002127290) (0xc001b5c280) Stream removed, broadcasting: 5
May 11 21:25:02.262: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:25:02.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3828" for this suite.

• [SLOW TEST:11.016 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":143,"skipped":2497,"failed":0}
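The pod above shares an emptyDir between two containers: one writes `/usr/share/volumeshare/shareddata.txt` and `busybox-main-container` reads it back via exec. A sketch of that pod shape (the pod and container names and the mount path come from the log; the volume name and images are assumptions):

```python
# Sketch of a two-container pod sharing one emptyDir volume.
shared_volume_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-sharedvolume-8603e4cd-178d-4491-8a16-9839d2ed734e"},
    "spec": {
        "volumes": [{"name": "shared-data", "emptyDir": {}}],  # assumed name
        "containers": [
            {"name": "busybox-main-container", "image": "busybox",  # assumed image
             "volumeMounts": [{"name": "shared-data",
                               "mountPath": "/usr/share/volumeshare"}]},
            {"name": "nginx-container", "image": "nginx",           # assumed image
             "volumeMounts": [{"name": "shared-data",
                               "mountPath": "/usr/share/volumeshare"}]},
        ],
    },
}
```

Because both containers mount the same volume at the same path, a file written by one is immediately visible to the other.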
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:25:02.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 11 21:25:07.535: INFO: Successfully updated pod "annotationupdatef5291426-74f4-49a5-954e-bbf457429954"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:25:11.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8119" for this suite.

• [SLOW TEST:9.175 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2499,"failed":0}
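The test above updates a pod's annotations and waits for the projected downwardAPI volume to re-render the file. A sketch of the volume shape involved (all field values here are assumptions; the log only shows the pod name):

```python
# Illustrative projected volume exposing pod annotations via the
# downward API; the kubelet rewrites the file when annotations change.
projected_annotations_volume = {
    "name": "podinfo",  # assumed volume name
    "projected": {
        "sources": [{
            "downwardAPI": {
                "items": [{
                    "path": "annotations",
                    "fieldRef": {"fieldPath": "metadata.annotations"},
                }],
            },
        }],
    },
}
```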
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:25:11.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-103ce0c6-534e-4769-9574-02abc3068c1c
STEP: Creating a pod to test consume secrets
May 11 21:25:12.060: INFO: Waiting up to 5m0s for pod "pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35" in namespace "secrets-4780" to be "Succeeded or Failed"
May 11 21:25:12.458: INFO: Pod "pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35": Phase="Pending", Reason="", readiness=false. Elapsed: 398.442779ms
May 11 21:25:14.491: INFO: Pod "pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431368203s
May 11 21:25:16.644: INFO: Pod "pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.583868009s
May 11 21:25:18.670: INFO: Pod "pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35": Phase="Running", Reason="", readiness=true. Elapsed: 6.609904007s
May 11 21:25:20.685: INFO: Pod "pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.62548033s
STEP: Saw pod success
May 11 21:25:20.685: INFO: Pod "pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35" satisfied condition "Succeeded or Failed"
May 11 21:25:20.687: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35 container secret-volume-test: 
STEP: delete the pod
May 11 21:25:21.034: INFO: Waiting for pod pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35 to disappear
May 11 21:25:21.588: INFO: Pod pod-secrets-2545d214-efa9-4de3-ad69-5879e605db35 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:25:21.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4780" for this suite.

• [SLOW TEST:10.154 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2514,"failed":0}
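The `Elapsed:` lines above come from the framework's wait loop: it re-reads the pod's phase until the pod reaches a terminal state ("Succeeded or Failed") or the 5m timeout expires. A hedged sketch of that loop (names are hypothetical; the real helper lives in the e2e framework's pod-wait utilities):

```python
import itertools
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=0.05):
    # Poll get_phase() until a terminal phase is reported, mirroring
    # 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"'.
    # Each iteration logs Phase and elapsed time, like the INFO lines above.
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        print(f'Phase="{phase}", elapsed: {time.monotonic() - start:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated pod: Pending twice, Running once, then Succeeded (the same
# progression the log shows for pod-secrets-2545d214-...).
phases = itertools.chain(["Pending", "Pending", "Running"],
                         itertools.repeat("Succeeded"))
result = wait_for_pod_condition(lambda: next(phases))
```

Note the loop treats `Running` as non-terminal and keeps polling, which is why the log shows a `Running` sample between the `Pending` and `Succeeded` ones.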
SSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:25:21.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 11 21:25:32.758: INFO: Successfully updated pod "pod-update-activedeadlineseconds-70f9f8f2-bc29-42c8-b173-46adf32fdecc"
May 11 21:25:32.758: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-70f9f8f2-bc29-42c8-b173-46adf32fdecc" in namespace "pods-4516" to be "terminated due to deadline exceeded"
May 11 21:25:32.792: INFO: Pod "pod-update-activedeadlineseconds-70f9f8f2-bc29-42c8-b173-46adf32fdecc": Phase="Running", Reason="", readiness=true. Elapsed: 34.157529ms
May 11 21:25:34.796: INFO: Pod "pod-update-activedeadlineseconds-70f9f8f2-bc29-42c8-b173-46adf32fdecc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.037928168s
May 11 21:25:34.796: INFO: Pod "pod-update-activedeadlineseconds-70f9f8f2-bc29-42c8-b173-46adf32fdecc" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:25:34.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4516" for this suite.

• [SLOW TEST:13.052 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2517,"failed":0}
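The transition above (`Phase="Running"` then `Phase="Failed", Reason="DeadlineExceeded"` about two seconds after the update) is the kubelet enforcing the shortened `activeDeadlineSeconds`. A simplified, hypothetical sketch of that check — the real kubelet logic is more involved, but the terminal state it produces matches the log:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pod:
    start_time: float
    active_deadline_seconds: Optional[int]
    phase: str = "Running"
    reason: str = ""

def enforce_active_deadline(pod, now=None):
    # Once the pod has run longer than activeDeadlineSeconds, mark it
    # Failed with reason DeadlineExceeded (the transition in the log above).
    now = time.monotonic() if now is None else now
    if pod.active_deadline_seconds is not None and \
            now - pod.start_time > pod.active_deadline_seconds:
        pod.phase, pod.reason = "Failed", "DeadlineExceeded"
    return pod

pod = Pod(start_time=0.0, active_deadline_seconds=5)
enforce_active_deadline(pod, now=3.0)   # within the deadline
within = pod.phase                      # still "Running"
enforce_active_deadline(pod, now=6.0)   # deadline exceeded
```

Because the deadline is evaluated on the kubelet's sync loop, the test polls the phase rather than expecting the failure at the exact deadline instant.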
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:25:34.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:25:36.423: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/: 
alternatives.log
containers/
(identical directory listing returned for each of the remaining proxy requests; log truncated before the end of this test)
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:25:37.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5127" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":148,"skipped":2548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:25:37.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:25:37.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9133" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":149,"skipped":2580,"failed":0}
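The STEP lines above walk the discovery documents top-down: `/apis` lists groups and their versions, and `/apis/<group>/<version>` lists the resources served there. A sketch of that walk over plain dicts shaped loosely like the real discovery payloads (the dict literals here are illustrative, not captured API responses):

```python
discovery = {
    "/apis": {"groups": [
        {"name": "apiextensions.k8s.io",
         "versions": [{"groupVersion": "apiextensions.k8s.io/v1"}]},
    ]},
    "/apis/apiextensions.k8s.io/v1": {"resources": [
        {"name": "customresourcedefinitions", "kind": "CustomResourceDefinition"},
    ]},
}

def find_resource(docs, group, version, resource):
    # Steps 1-2: find the group, then the group/version, in /apis.
    g = next(x for x in docs["/apis"]["groups"] if x["name"] == group)
    gv = f"{group}/{version}"
    assert any(v["groupVersion"] == gv for v in g["versions"])
    # Step 3: fetch the group/version document and find the resource.
    return next(r for r in docs[f"/apis/{gv}"]["resources"]
                if r["name"] == resource)

crd = find_resource(discovery, "apiextensions.k8s.io", "v1",
                    "customresourcedefinitions")
```

The conformance test asserts each of these lookups succeeds, which is what guarantees CRD types are advertised to clients that rely on discovery.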
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:25:37.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-5271
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 11 21:25:38.105: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 11 21:25:38.249: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:25:40.830: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:25:42.453: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:25:44.619: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:25:46.876: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 11 21:25:48.327: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:25:50.968: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:25:52.333: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:25:54.476: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:25:56.252: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:25:58.252: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:26:00.327: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:26:02.260: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 11 21:26:04.252: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 11 21:26:04.257: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 11 21:26:06.260: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 11 21:26:12.402: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.107 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5271 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:26:12.402: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:26:12.429517       7 log.go:172] (0xc002127600) (0xc0014a1e00) Create stream
I0511 21:26:12.429536       7 log.go:172] (0xc002127600) (0xc0014a1e00) Stream added, broadcasting: 1
I0511 21:26:12.430499       7 log.go:172] (0xc002127600) Reply frame received for 1
I0511 21:26:12.430522       7 log.go:172] (0xc002127600) (0xc000f4a140) Create stream
I0511 21:26:12.430529       7 log.go:172] (0xc002127600) (0xc000f4a140) Stream added, broadcasting: 3
I0511 21:26:12.431145       7 log.go:172] (0xc002127600) Reply frame received for 3
I0511 21:26:12.431158       7 log.go:172] (0xc002127600) (0xc0014a1ea0) Create stream
I0511 21:26:12.431165       7 log.go:172] (0xc002127600) (0xc0014a1ea0) Stream added, broadcasting: 5
I0511 21:26:12.431772       7 log.go:172] (0xc002127600) Reply frame received for 5
I0511 21:26:13.482169       7 log.go:172] (0xc002127600) Data frame received for 3
I0511 21:26:13.482200       7 log.go:172] (0xc000f4a140) (3) Data frame handling
I0511 21:26:13.482227       7 log.go:172] (0xc000f4a140) (3) Data frame sent
I0511 21:26:13.482297       7 log.go:172] (0xc002127600) Data frame received for 3
I0511 21:26:13.482310       7 log.go:172] (0xc000f4a140) (3) Data frame handling
I0511 21:26:13.482328       7 log.go:172] (0xc002127600) Data frame received for 5
I0511 21:26:13.482355       7 log.go:172] (0xc0014a1ea0) (5) Data frame handling
I0511 21:26:13.484163       7 log.go:172] (0xc002127600) Data frame received for 1
I0511 21:26:13.484182       7 log.go:172] (0xc0014a1e00) (1) Data frame handling
I0511 21:26:13.484215       7 log.go:172] (0xc0014a1e00) (1) Data frame sent
I0511 21:26:13.484283       7 log.go:172] (0xc002127600) (0xc0014a1e00) Stream removed, broadcasting: 1
I0511 21:26:13.484406       7 log.go:172] (0xc002127600) (0xc0014a1e00) Stream removed, broadcasting: 1
I0511 21:26:13.484433       7 log.go:172] (0xc002127600) (0xc000f4a140) Stream removed, broadcasting: 3
I0511 21:26:13.484450       7 log.go:172] (0xc002127600) (0xc0014a1ea0) Stream removed, broadcasting: 5
May 11 21:26:13.484: INFO: Found all expected endpoints: [netserver-0]
I0511 21:26:13.484507       7 log.go:172] (0xc002127600) Go away received
May 11 21:26:13.487: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.151 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5271 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:26:13.487: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:26:13.516801       7 log.go:172] (0xc0014242c0) (0xc000f4a640) Create stream
I0511 21:26:13.516836       7 log.go:172] (0xc0014242c0) (0xc000f4a640) Stream added, broadcasting: 1
I0511 21:26:13.526244       7 log.go:172] (0xc0014242c0) Reply frame received for 1
I0511 21:26:13.526275       7 log.go:172] (0xc0014242c0) (0xc000f84be0) Create stream
I0511 21:26:13.526289       7 log.go:172] (0xc0014242c0) (0xc000f84be0) Stream added, broadcasting: 3
I0511 21:26:13.527187       7 log.go:172] (0xc0014242c0) Reply frame received for 3
I0511 21:26:13.527213       7 log.go:172] (0xc0014242c0) (0xc0016b2fa0) Create stream
I0511 21:26:13.527218       7 log.go:172] (0xc0014242c0) (0xc0016b2fa0) Stream added, broadcasting: 5
I0511 21:26:13.528337       7 log.go:172] (0xc0014242c0) Reply frame received for 5
I0511 21:26:14.582662       7 log.go:172] (0xc0014242c0) Data frame received for 3
I0511 21:26:14.582714       7 log.go:172] (0xc000f84be0) (3) Data frame handling
I0511 21:26:14.582749       7 log.go:172] (0xc000f84be0) (3) Data frame sent
I0511 21:26:14.582772       7 log.go:172] (0xc0014242c0) Data frame received for 3
I0511 21:26:14.582792       7 log.go:172] (0xc000f84be0) (3) Data frame handling
I0511 21:26:14.583199       7 log.go:172] (0xc0014242c0) Data frame received for 5
I0511 21:26:14.583231       7 log.go:172] (0xc0016b2fa0) (5) Data frame handling
I0511 21:26:14.584926       7 log.go:172] (0xc0014242c0) Data frame received for 1
I0511 21:26:14.584963       7 log.go:172] (0xc000f4a640) (1) Data frame handling
I0511 21:26:14.585013       7 log.go:172] (0xc000f4a640) (1) Data frame sent
I0511 21:26:14.585043       7 log.go:172] (0xc0014242c0) (0xc000f4a640) Stream removed, broadcasting: 1
I0511 21:26:14.585085       7 log.go:172] (0xc0014242c0) Go away received
I0511 21:26:14.585585       7 log.go:172] (0xc0014242c0) (0xc000f4a640) Stream removed, broadcasting: 1
I0511 21:26:14.585612       7 log.go:172] (0xc0014242c0) (0xc000f84be0) Stream removed, broadcasting: 3
I0511 21:26:14.585640       7 log.go:172] (0xc0014242c0) (0xc0016b2fa0) Stream removed, broadcasting: 5
May 11 21:26:14.585: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:26:14.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5271" for this suite.

• [SLOW TEST:36.665 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2633,"failed":0}
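The probe the test runs from the host pod is `echo hostName | nc -w 1 -u <pod-ip> 8081`: each netserver pod answers the string `hostName` with its own name, and the test collects replies until all expected endpoints are seen. A self-contained local sketch of that exchange over UDP (loopback and an ephemeral port stand in for the pod IP and port 8081):

```python
import socket
import threading

def netserver(name, sock, ready):
    # Stand-in for an e2e "netserver" pod: answer the probe string
    # "hostName" with the server's own name.
    ready.set()
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(name.encode(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # ephemeral port instead of 8081
port = server.getsockname()[1]
ready = threading.Event()
threading.Thread(target=netserver, args=("netserver-0", server, ready),
                 daemon=True).start()
ready.wait()

# Client side of `echo hostName | nc -w 1 -u <pod-ip> 8081`.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5.0)
client.sendto(b"hostName\n", ("127.0.0.1", port))
endpoint = client.recvfrom(1024)[0].decode()
```

Because UDP is connectionless, the real test can only confirm reachability by getting the name back, which is why the log reports "Found all expected endpoints" per netserver rather than a connection status.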
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:26:14.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:26:19.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2342" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2639,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:26:19.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 11 21:26:25.838: INFO: Successfully updated pod "pod-update-159d5d89-4b5a-4ecd-bd29-b9f91f56884b"
STEP: verifying the updated pod is in kubernetes
May 11 21:26:25.905: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:26:25.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8776" for this suite.

• [SLOW TEST:6.853 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2642,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:26:25.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7229.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7229.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7229.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7229.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 21:26:38.190: INFO: DNS probes using dns-test-e1d9e6aa-29ee-4981-b710-1618920d392a succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7229.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7229.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7229.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7229.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 21:26:50.711: INFO: File wheezy_udp@dns-test-service-3.dns-7229.svc.cluster.local from pod  dns-7229/dns-test-5a761304-33a5-438e-a165-fc9479d50c72 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 11 21:26:50.714: INFO: File jessie_udp@dns-test-service-3.dns-7229.svc.cluster.local from pod  dns-7229/dns-test-5a761304-33a5-438e-a165-fc9479d50c72 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 11 21:26:50.714: INFO: Lookups using dns-7229/dns-test-5a761304-33a5-438e-a165-fc9479d50c72 failed for: [wheezy_udp@dns-test-service-3.dns-7229.svc.cluster.local jessie_udp@dns-test-service-3.dns-7229.svc.cluster.local]

May 11 21:26:55.718: INFO: File wheezy_udp@dns-test-service-3.dns-7229.svc.cluster.local from pod  dns-7229/dns-test-5a761304-33a5-438e-a165-fc9479d50c72 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 11 21:26:55.721: INFO: File jessie_udp@dns-test-service-3.dns-7229.svc.cluster.local from pod  dns-7229/dns-test-5a761304-33a5-438e-a165-fc9479d50c72 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 11 21:26:55.721: INFO: Lookups using dns-7229/dns-test-5a761304-33a5-438e-a165-fc9479d50c72 failed for: [wheezy_udp@dns-test-service-3.dns-7229.svc.cluster.local jessie_udp@dns-test-service-3.dns-7229.svc.cluster.local]

May 11 21:27:00.754: INFO: File wheezy_udp@dns-test-service-3.dns-7229.svc.cluster.local from pod  dns-7229/dns-test-5a761304-33a5-438e-a165-fc9479d50c72 contains '' instead of 'bar.example.com.'
May 11 21:27:00.759: INFO: File jessie_udp@dns-test-service-3.dns-7229.svc.cluster.local from pod  dns-7229/dns-test-5a761304-33a5-438e-a165-fc9479d50c72 contains '' instead of 'bar.example.com.'
May 11 21:27:00.759: INFO: Lookups using dns-7229/dns-test-5a761304-33a5-438e-a165-fc9479d50c72 failed for: [wheezy_udp@dns-test-service-3.dns-7229.svc.cluster.local jessie_udp@dns-test-service-3.dns-7229.svc.cluster.local]

May 11 21:27:05.722: INFO: DNS probes using dns-test-5a761304-33a5-438e-a165-fc9479d50c72 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7229.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7229.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7229.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7229.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 21:27:14.680: INFO: DNS probes using dns-test-f6b0397a-9b70-4568-b59b-3825bb8fcd5e succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:27:14.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7229" for this suite.

• [SLOW TEST:48.997 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":153,"skipped":2658,"failed":0}
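The failed lookups above are expected: after the ExternalName target changes from `foo.example.com` to `bar.example.com`, cached answers (and briefly empty answers) are returned until the change propagates, so the prober retries until the new CNAME appears. A sketch of that retry loop with a stub resolver (the stub's answer sequence mimics the log; it is not a real DNS query):

```python
import itertools
import time

def wait_for_cname(resolve, expected, timeout=30.0, interval=0.01):
    # Re-resolve until the answer matches the updated ExternalName target,
    # recording the stale/empty answers seen while the change propagates.
    deadline = time.monotonic() + timeout
    failures = []
    while time.monotonic() < deadline:
        answer = resolve()
        if answer == expected:
            return failures
        failures.append(f"contains '{answer}' instead of '{expected}'")
        time.sleep(interval)
    raise TimeoutError(failures)

# Stub resolver: stale CNAME twice, one empty answer, then the new target
# (the same progression the log shows across the 5s probe intervals).
answers = itertools.chain(["foo.example.com.", "foo.example.com.", ""],
                          itertools.repeat("bar.example.com."))
failures = wait_for_cname(lambda: next(answers), "bar.example.com.")
```

Tolerating transient mismatches this way is what lets the test end with "DNS probes ... succeeded" despite the intermediate failure lines.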
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:27:14.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:27:17.698: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:27:21.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829238, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829238, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829240, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829237, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:27:23.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829238, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829238, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829240, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829237, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:27:25.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829238, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829238, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829240, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829237, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:27:27.358: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829238, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829238, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829240, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829237, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:27:32.013: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:27:33.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9127" for this suite.
STEP: Destroying namespace "webhook-9127-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.879 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":154,"skipped":2663,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:27:34.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-d5lv
STEP: Creating a pod to test atomic-volume-subpath
May 11 21:27:35.295: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-d5lv" in namespace "subpath-9685" to be "Succeeded or Failed"
May 11 21:27:35.431: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Pending", Reason="", readiness=false. Elapsed: 136.565399ms
May 11 21:27:37.435: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139850581s
May 11 21:27:39.581: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286497584s
May 11 21:27:41.586: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 6.291307711s
May 11 21:27:43.589: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 8.294399668s
May 11 21:27:45.593: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 10.297840288s
May 11 21:27:47.596: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 12.301435151s
May 11 21:27:49.599: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 14.304245809s
May 11 21:27:51.832: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 16.536944891s
May 11 21:27:53.836: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 18.540814812s
May 11 21:27:55.840: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 20.544852638s
May 11 21:27:57.844: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 22.548876064s
May 11 21:27:59.935: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Running", Reason="", readiness=true. Elapsed: 24.640351791s
May 11 21:28:01.939: INFO: Pod "pod-subpath-test-downwardapi-d5lv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.643965091s
STEP: Saw pod success
May 11 21:28:01.939: INFO: Pod "pod-subpath-test-downwardapi-d5lv" satisfied condition "Succeeded or Failed"
May 11 21:28:01.941: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-d5lv container test-container-subpath-downwardapi-d5lv: 
STEP: delete the pod
May 11 21:28:02.096: INFO: Waiting for pod pod-subpath-test-downwardapi-d5lv to disappear
May 11 21:28:02.126: INFO: Pod pod-subpath-test-downwardapi-d5lv no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-d5lv
May 11 21:28:02.126: INFO: Deleting pod "pod-subpath-test-downwardapi-d5lv" in namespace "subpath-9685"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:28:02.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9685" for this suite.

• [SLOW TEST:27.375 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":155,"skipped":2699,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:28:02.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:28:10.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7006" for this suite.

• [SLOW TEST:8.525 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2707,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:28:10.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9973
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-9973
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9973
May 11 21:28:13.198: INFO: Found 0 stateful pods, waiting for 1
May 11 21:28:23.203: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 11 21:28:23.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:28:32.602: INFO: stderr: "I0511 21:28:32.459113    1897 log.go:172] (0xc000730fd0) (0xc000894fa0) Create stream\nI0511 21:28:32.459140    1897 log.go:172] (0xc000730fd0) (0xc000894fa0) Stream added, broadcasting: 1\nI0511 21:28:32.462700    1897 log.go:172] (0xc000730fd0) Reply frame received for 1\nI0511 21:28:32.462750    1897 log.go:172] (0xc000730fd0) (0xc00031a780) Create stream\nI0511 21:28:32.462764    1897 log.go:172] (0xc000730fd0) (0xc00031a780) Stream added, broadcasting: 3\nI0511 21:28:32.463382    1897 log.go:172] (0xc000730fd0) Reply frame received for 3\nI0511 21:28:32.463413    1897 log.go:172] (0xc000730fd0) (0xc00031aa00) Create stream\nI0511 21:28:32.463426    1897 log.go:172] (0xc000730fd0) (0xc00031aa00) Stream added, broadcasting: 5\nI0511 21:28:32.464038    1897 log.go:172] (0xc000730fd0) Reply frame received for 5\nI0511 21:28:32.543344    1897 log.go:172] (0xc000730fd0) Data frame received for 5\nI0511 21:28:32.543378    1897 log.go:172] (0xc00031aa00) (5) Data frame handling\nI0511 21:28:32.543393    1897 log.go:172] (0xc00031aa00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:28:32.596278    1897 log.go:172] (0xc000730fd0) Data frame received for 3\nI0511 21:28:32.596299    1897 log.go:172] (0xc00031a780) (3) Data frame handling\nI0511 21:28:32.596314    1897 log.go:172] (0xc00031a780) (3) Data frame sent\nI0511 21:28:32.596756    1897 log.go:172] (0xc000730fd0) Data frame received for 5\nI0511 21:28:32.596775    1897 log.go:172] (0xc00031aa00) (5) Data frame handling\nI0511 21:28:32.596810    1897 log.go:172] (0xc000730fd0) Data frame received for 3\nI0511 21:28:32.596823    1897 log.go:172] (0xc00031a780) (3) Data frame handling\nI0511 21:28:32.598724    1897 log.go:172] (0xc000730fd0) Data frame received for 1\nI0511 21:28:32.598742    1897 log.go:172] (0xc000894fa0) (1) Data frame handling\nI0511 21:28:32.598769    1897 log.go:172] (0xc000894fa0) (1) Data frame sent\nI0511 21:28:32.598808    1897 log.go:172] (0xc000730fd0) (0xc000894fa0) Stream removed, broadcasting: 1\nI0511 21:28:32.598832    1897 log.go:172] (0xc000730fd0) Go away received\nI0511 21:28:32.599188    1897 log.go:172] (0xc000730fd0) (0xc000894fa0) Stream removed, broadcasting: 1\nI0511 21:28:32.599218    1897 log.go:172] (0xc000730fd0) (0xc00031a780) Stream removed, broadcasting: 3\nI0511 21:28:32.599227    1897 log.go:172] (0xc000730fd0) (0xc00031aa00) Stream removed, broadcasting: 5\n"
May 11 21:28:32.602: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:28:32.602: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 11 21:28:32.609: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 11 21:28:43.115: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 11 21:28:43.115: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:28:45.017: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 11 21:28:45.017: INFO: ss-0  kali-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:28:45.017: INFO: 
May 11 21:28:45.017: INFO: StatefulSet ss has not reached scale 3, at 1
May 11 21:28:46.053: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.554507667s
May 11 21:28:47.220: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.51862909s
May 11 21:28:49.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.351200406s
May 11 21:28:51.760: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.697883086s
May 11 21:28:52.847: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.81092135s
May 11 21:28:53.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 724.269478ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9973
May 11 21:28:54.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:28:55.067: INFO: stderr: "I0511 21:28:55.001394    1928 log.go:172] (0xc00092a0b0) (0xc0004d2aa0) Create stream\nI0511 21:28:55.001467    1928 log.go:172] (0xc00092a0b0) (0xc0004d2aa0) Stream added, broadcasting: 1\nI0511 21:28:55.002869    1928 log.go:172] (0xc00092a0b0) Reply frame received for 1\nI0511 21:28:55.002914    1928 log.go:172] (0xc00092a0b0) (0xc00099e000) Create stream\nI0511 21:28:55.002928    1928 log.go:172] (0xc00092a0b0) (0xc00099e000) Stream added, broadcasting: 3\nI0511 21:28:55.003786    1928 log.go:172] (0xc00092a0b0) Reply frame received for 3\nI0511 21:28:55.003819    1928 log.go:172] (0xc00092a0b0) (0xc000709220) Create stream\nI0511 21:28:55.003830    1928 log.go:172] (0xc00092a0b0) (0xc000709220) Stream added, broadcasting: 5\nI0511 21:28:55.004609    1928 log.go:172] (0xc00092a0b0) Reply frame received for 5\nI0511 21:28:55.061842    1928 log.go:172] (0xc00092a0b0) Data frame received for 5\nI0511 21:28:55.061874    1928 log.go:172] (0xc000709220) (5) Data frame handling\nI0511 21:28:55.061886    1928 log.go:172] (0xc000709220) (5) Data frame sent\nI0511 21:28:55.061893    1928 log.go:172] (0xc00092a0b0) Data frame received for 5\nI0511 21:28:55.061900    1928 log.go:172] (0xc000709220) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:28:55.061921    1928 log.go:172] (0xc00092a0b0) Data frame received for 3\nI0511 21:28:55.061928    1928 log.go:172] (0xc00099e000) (3) Data frame handling\nI0511 21:28:55.061936    1928 log.go:172] (0xc00099e000) (3) Data frame sent\nI0511 21:28:55.061942    1928 log.go:172] (0xc00092a0b0) Data frame received for 3\nI0511 21:28:55.061948    1928 log.go:172] (0xc00099e000) (3) Data frame handling\nI0511 21:28:55.063263    1928 log.go:172] (0xc00092a0b0) Data frame received for 1\nI0511 21:28:55.063288    1928 log.go:172] (0xc0004d2aa0) (1) Data frame handling\nI0511 21:28:55.063313    1928 log.go:172] (0xc0004d2aa0) (1) Data frame sent\nI0511 21:28:55.063326    1928 log.go:172] (0xc00092a0b0) (0xc0004d2aa0) Stream removed, broadcasting: 1\nI0511 21:28:55.063473    1928 log.go:172] (0xc00092a0b0) Go away received\nI0511 21:28:55.063660    1928 log.go:172] (0xc00092a0b0) (0xc0004d2aa0) Stream removed, broadcasting: 1\nI0511 21:28:55.063682    1928 log.go:172] (0xc00092a0b0) (0xc00099e000) Stream removed, broadcasting: 3\nI0511 21:28:55.063692    1928 log.go:172] (0xc00092a0b0) (0xc000709220) Stream removed, broadcasting: 5\n"
May 11 21:28:55.067: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 11 21:28:55.067: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 11 21:28:55.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:28:55.281: INFO: stderr: "I0511 21:28:55.206761    1950 log.go:172] (0xc0009d14a0) (0xc00099e6e0) Create stream\nI0511 21:28:55.207014    1950 log.go:172] (0xc0009d14a0) (0xc00099e6e0) Stream added, broadcasting: 1\nI0511 21:28:55.210122    1950 log.go:172] (0xc0009d14a0) Reply frame received for 1\nI0511 21:28:55.210190    1950 log.go:172] (0xc0009d14a0) (0xc000ac6140) Create stream\nI0511 21:28:55.210214    1950 log.go:172] (0xc0009d14a0) (0xc000ac6140) Stream added, broadcasting: 3\nI0511 21:28:55.211507    1950 log.go:172] (0xc0009d14a0) Reply frame received for 3\nI0511 21:28:55.211577    1950 log.go:172] (0xc0009d14a0) (0xc00099e780) Create stream\nI0511 21:28:55.211616    1950 log.go:172] (0xc0009d14a0) (0xc00099e780) Stream added, broadcasting: 5\nI0511 21:28:55.212829    1950 log.go:172] (0xc0009d14a0) Reply frame received for 5\nI0511 21:28:55.274486    1950 log.go:172] (0xc0009d14a0) Data frame received for 5\nI0511 21:28:55.274545    1950 log.go:172] (0xc00099e780) (5) Data frame handling\nI0511 21:28:55.274560    1950 log.go:172] (0xc00099e780) (5) Data frame sent\nI0511 21:28:55.274570    1950 log.go:172] (0xc0009d14a0) Data frame received for 5\nI0511 21:28:55.274579    1950 log.go:172] (0xc00099e780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 21:28:55.274602    1950 log.go:172] (0xc0009d14a0) Data frame received for 3\nI0511 21:28:55.274611    1950 log.go:172] (0xc000ac6140) (3) Data frame handling\nI0511 21:28:55.274628    1950 log.go:172] (0xc000ac6140) (3) Data frame sent\nI0511 21:28:55.274645    1950 log.go:172] (0xc0009d14a0) Data frame received for 3\nI0511 21:28:55.274662    1950 log.go:172] (0xc000ac6140) (3) Data frame handling\nI0511 21:28:55.276344    1950 log.go:172] (0xc0009d14a0) Data frame received for 1\nI0511 21:28:55.276380    1950 log.go:172] (0xc00099e6e0) (1) Data frame handling\nI0511 21:28:55.276412    1950 log.go:172] (0xc00099e6e0) (1) Data frame sent\nI0511 21:28:55.276433    1950 log.go:172] (0xc0009d14a0) (0xc00099e6e0) Stream removed, broadcasting: 1\nI0511 21:28:55.276656    1950 log.go:172] (0xc0009d14a0) Go away received\nI0511 21:28:55.276942    1950 log.go:172] (0xc0009d14a0) (0xc00099e6e0) Stream removed, broadcasting: 1\nI0511 21:28:55.276968    1950 log.go:172] (0xc0009d14a0) (0xc000ac6140) Stream removed, broadcasting: 3\nI0511 21:28:55.276982    1950 log.go:172] (0xc0009d14a0) (0xc00099e780) Stream removed, broadcasting: 5\n"
May 11 21:28:55.281: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 11 21:28:55.281: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 11 21:28:55.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:28:55.463: INFO: stderr: "I0511 21:28:55.401428    1969 log.go:172] (0xc000781080) (0xc00097a5a0) Create stream\nI0511 21:28:55.401481    1969 log.go:172] (0xc000781080) (0xc00097a5a0) Stream added, broadcasting: 1\nI0511 21:28:55.404340    1969 log.go:172] (0xc000781080) Reply frame received for 1\nI0511 21:28:55.404717    1969 log.go:172] (0xc000781080) (0xc0008ce0a0) Create stream\nI0511 21:28:55.404796    1969 log.go:172] (0xc000781080) (0xc0008ce0a0) Stream added, broadcasting: 3\nI0511 21:28:55.406033    1969 log.go:172] (0xc000781080) Reply frame received for 3\nI0511 21:28:55.406067    1969 log.go:172] (0xc000781080) (0xc0008ce140) Create stream\nI0511 21:28:55.406076    1969 log.go:172] (0xc000781080) (0xc0008ce140) Stream added, broadcasting: 5\nI0511 21:28:55.406820    1969 log.go:172] (0xc000781080) Reply frame received for 5\nI0511 21:28:55.456682    1969 log.go:172] (0xc000781080) Data frame received for 5\nI0511 21:28:55.456707    1969 log.go:172] (0xc0008ce140) (5) Data frame handling\nI0511 21:28:55.456715    1969 log.go:172] (0xc0008ce140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 21:28:55.456760    1969 log.go:172] (0xc000781080) Data frame received for 3\nI0511 21:28:55.456815    1969 log.go:172] (0xc0008ce0a0) (3) Data frame handling\nI0511 21:28:55.456928    1969 log.go:172] (0xc0008ce0a0) (3) Data frame sent\nI0511 21:28:55.456955    1969 log.go:172] (0xc000781080) Data frame received for 3\nI0511 21:28:55.456969    1969 log.go:172] (0xc000781080) Data frame received for 5\nI0511 21:28:55.456983    1969 log.go:172] (0xc0008ce140) (5) Data frame handling\nI0511 21:28:55.457000    1969 log.go:172] (0xc0008ce0a0) (3) Data frame handling\nI0511 21:28:55.458736    1969 log.go:172] (0xc000781080) Data frame received for 1\nI0511 21:28:55.458763    1969 log.go:172] (0xc00097a5a0) (1) Data frame handling\nI0511 21:28:55.458778    1969 log.go:172] (0xc00097a5a0) (1) Data frame sent\nI0511 21:28:55.458797    1969 log.go:172] (0xc000781080) (0xc00097a5a0) Stream removed, broadcasting: 1\nI0511 21:28:55.458887    1969 log.go:172] (0xc000781080) Go away received\nI0511 21:28:55.459088    1969 log.go:172] (0xc000781080) (0xc00097a5a0) Stream removed, broadcasting: 1\nI0511 21:28:55.459103    1969 log.go:172] (0xc000781080) (0xc0008ce0a0) Stream removed, broadcasting: 3\nI0511 21:28:55.459115    1969 log.go:172] (0xc000781080) (0xc0008ce140) Stream removed, broadcasting: 5\n"
May 11 21:28:55.463: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 11 21:28:55.463: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 11 21:28:55.466: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
May 11 21:29:05.471: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:29:05.471: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:29:05.471: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May 11 21:29:05.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:29:05.659: INFO: stderr: "I0511 21:29:05.591406    1990 log.go:172] (0xc000672370) (0xc000666140) Create stream\nI0511 21:29:05.591452    1990 log.go:172] (0xc000672370) (0xc000666140) Stream added, broadcasting: 1\nI0511 21:29:05.593463    1990 log.go:172] (0xc000672370) Reply frame received for 1\nI0511 21:29:05.593509    1990 log.go:172] (0xc000672370) (0xc0009a2000) Create stream\nI0511 21:29:05.593522    1990 log.go:172] (0xc000672370) (0xc0009a2000) Stream added, broadcasting: 3\nI0511 21:29:05.594293    1990 log.go:172] (0xc000672370) Reply frame received for 3\nI0511 21:29:05.594322    1990 log.go:172] (0xc000672370) (0xc0006661e0) Create stream\nI0511 21:29:05.594330    1990 log.go:172] (0xc000672370) (0xc0006661e0) Stream added, broadcasting: 5\nI0511 21:29:05.595055    1990 log.go:172] (0xc000672370) Reply frame received for 5\nI0511 21:29:05.654217    1990 log.go:172] (0xc000672370) Data frame received for 3\nI0511 21:29:05.654246    1990 log.go:172] (0xc0009a2000) (3) Data frame handling\nI0511 21:29:05.654267    1990 log.go:172] (0xc0009a2000) (3) Data frame sent\nI0511 21:29:05.654286    1990 log.go:172] (0xc000672370) Data frame received for 3\nI0511 21:29:05.654312    1990 log.go:172] (0xc0009a2000) (3) Data frame handling\nI0511 21:29:05.654438    1990 log.go:172] (0xc000672370) Data frame received for 5\nI0511 21:29:05.654458    1990 log.go:172] (0xc0006661e0) (5) Data frame handling\nI0511 21:29:05.654481    1990 log.go:172] (0xc0006661e0) (5) Data frame sent\nI0511 21:29:05.654495    1990 log.go:172] (0xc000672370) Data frame received for 5\nI0511 21:29:05.654509    1990 log.go:172] (0xc0006661e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:29:05.656011    1990 log.go:172] (0xc000672370) Data frame received for 1\nI0511 21:29:05.656034    1990 log.go:172] (0xc000666140) (1) Data frame handling\nI0511 21:29:05.656057    1990 log.go:172] (0xc000666140) (1) Data frame sent\nI0511 21:29:05.656073    1990 log.go:172] (0xc000672370) (0xc000666140) Stream removed, broadcasting: 1\nI0511 21:29:05.656093    1990 log.go:172] (0xc000672370) Go away received\nI0511 21:29:05.656347    1990 log.go:172] (0xc000672370) (0xc000666140) Stream removed, broadcasting: 1\nI0511 21:29:05.656364    1990 log.go:172] (0xc000672370) (0xc0009a2000) Stream removed, broadcasting: 3\nI0511 21:29:05.656377    1990 log.go:172] (0xc000672370) (0xc0006661e0) Stream removed, broadcasting: 5\n"
May 11 21:29:05.659: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:29:05.659: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 11 21:29:05.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:29:05.944: INFO: stderr: "I0511 21:29:05.781514    2013 log.go:172] (0xc0009e2000) (0xc0005c17c0) Create stream\nI0511 21:29:05.781560    2013 log.go:172] (0xc0009e2000) (0xc0005c17c0) Stream added, broadcasting: 1\nI0511 21:29:05.783672    2013 log.go:172] (0xc0009e2000) Reply frame received for 1\nI0511 21:29:05.783691    2013 log.go:172] (0xc0009e2000) (0xc000416be0) Create stream\nI0511 21:29:05.783698    2013 log.go:172] (0xc0009e2000) (0xc000416be0) Stream added, broadcasting: 3\nI0511 21:29:05.784700    2013 log.go:172] (0xc0009e2000) Reply frame received for 3\nI0511 21:29:05.784723    2013 log.go:172] (0xc0009e2000) (0xc0009c0000) Create stream\nI0511 21:29:05.784729    2013 log.go:172] (0xc0009e2000) (0xc0009c0000) Stream added, broadcasting: 5\nI0511 21:29:05.785839    2013 log.go:172] (0xc0009e2000) Reply frame received for 5\nI0511 21:29:05.857060    2013 log.go:172] (0xc0009e2000) Data frame received for 5\nI0511 21:29:05.857080    2013 log.go:172] (0xc0009c0000) (5) Data frame handling\nI0511 21:29:05.857091    2013 log.go:172] (0xc0009c0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:29:05.936357    2013 log.go:172] (0xc0009e2000) Data frame received for 3\nI0511 21:29:05.936397    2013 log.go:172] (0xc000416be0) (3) Data frame handling\nI0511 21:29:05.936420    2013 log.go:172] (0xc000416be0) (3) Data frame sent\nI0511 21:29:05.936579    2013 log.go:172] (0xc0009e2000) Data frame received for 3\nI0511 21:29:05.936620    2013 log.go:172] (0xc000416be0) (3) Data frame handling\nI0511 21:29:05.936735    2013 log.go:172] (0xc0009e2000) Data frame received for 5\nI0511 21:29:05.936759    2013 log.go:172] (0xc0009c0000) (5) Data frame handling\nI0511 21:29:05.939249    2013 log.go:172] (0xc0009e2000) Data frame received for 1\nI0511 21:29:05.939276    2013 log.go:172] (0xc0005c17c0) (1) Data frame handling\nI0511 21:29:05.939295    2013 log.go:172] (0xc0005c17c0) (1) Data frame sent\nI0511 21:29:05.939312    2013 log.go:172] (0xc0009e2000) (0xc0005c17c0) Stream removed, broadcasting: 1\nI0511 21:29:05.939521    2013 log.go:172] (0xc0009e2000) Go away received\nI0511 21:29:05.939694    2013 log.go:172] (0xc0009e2000) (0xc0005c17c0) Stream removed, broadcasting: 1\nI0511 21:29:05.939722    2013 log.go:172] (0xc0009e2000) (0xc000416be0) Stream removed, broadcasting: 3\nI0511 21:29:05.939744    2013 log.go:172] (0xc0009e2000) (0xc0009c0000) Stream removed, broadcasting: 5\n"
May 11 21:29:05.944: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:29:05.944: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 11 21:29:05.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:29:06.258: INFO: stderr: "I0511 21:29:06.076948    2034 log.go:172] (0xc000a14c60) (0xc0009ea140) Create stream\nI0511 21:29:06.077007    2034 log.go:172] (0xc000a14c60) (0xc0009ea140) Stream added, broadcasting: 1\nI0511 21:29:06.079327    2034 log.go:172] (0xc000a14c60) Reply frame received for 1\nI0511 21:29:06.079363    2034 log.go:172] (0xc000a14c60) (0xc0005e9360) Create stream\nI0511 21:29:06.079376    2034 log.go:172] (0xc000a14c60) (0xc0005e9360) Stream added, broadcasting: 3\nI0511 21:29:06.080146    2034 log.go:172] (0xc000a14c60) Reply frame received for 3\nI0511 21:29:06.080178    2034 log.go:172] (0xc000a14c60) (0xc0009ea1e0) Create stream\nI0511 21:29:06.080197    2034 log.go:172] (0xc000a14c60) (0xc0009ea1e0) Stream added, broadcasting: 5\nI0511 21:29:06.080859    2034 log.go:172] (0xc000a14c60) Reply frame received for 5\nI0511 21:29:06.160980    2034 log.go:172] (0xc000a14c60) Data frame received for 5\nI0511 21:29:06.161002    2034 log.go:172] (0xc0009ea1e0) (5) Data frame handling\nI0511 21:29:06.161014    2034 log.go:172] (0xc0009ea1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:29:06.248666    2034 log.go:172] (0xc000a14c60) Data frame received for 5\nI0511 21:29:06.248721    2034 log.go:172] (0xc0009ea1e0) (5) Data frame handling\nI0511 21:29:06.248755    2034 log.go:172] (0xc000a14c60) Data frame received for 3\nI0511 21:29:06.248775    2034 log.go:172] (0xc0005e9360) (3) Data frame handling\nI0511 21:29:06.248795    2034 log.go:172] (0xc0005e9360) (3) Data frame sent\nI0511 21:29:06.248815    2034 log.go:172] (0xc000a14c60) Data frame received for 3\nI0511 21:29:06.248842    2034 log.go:172] (0xc0005e9360) (3) Data frame handling\nI0511 21:29:06.250967    2034 log.go:172] (0xc000a14c60) Data frame received for 1\nI0511 21:29:06.250999    2034 log.go:172] (0xc0009ea140) (1) Data frame handling\nI0511 21:29:06.251019    2034 log.go:172] (0xc0009ea140) (1) Data frame sent\nI0511 21:29:06.251041    2034 log.go:172] (0xc000a14c60) (0xc0009ea140) Stream removed, broadcasting: 1\nI0511 21:29:06.251070    2034 log.go:172] (0xc000a14c60) Go away received\nI0511 21:29:06.251869    2034 log.go:172] (0xc000a14c60) (0xc0009ea140) Stream removed, broadcasting: 1\nI0511 21:29:06.251908    2034 log.go:172] (0xc000a14c60) (0xc0005e9360) Stream removed, broadcasting: 3\nI0511 21:29:06.251929    2034 log.go:172] (0xc000a14c60) (0xc0009ea1e0) Stream removed, broadcasting: 5\n"
May 11 21:29:06.258: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:29:06.258: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
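The exec above breaks the pod's readiness by moving `index.html` out of the Apache docroot, so the webserver's readiness probe starts failing; the trailing `|| true` keeps the command's exit code 0 even when the file is already gone. A minimal local sketch of that step (the temp directories stand in for `/usr/local/apache2/htdocs` and `/tmp` inside the container):

```shell
# Stand-ins for the container paths used by the test (illustrative only).
htdocs=$(mktemp -d)   # stand-in for /usr/local/apache2/htdocs
tmp=$(mktemp -d)      # stand-in for /tmp
echo hello > "$htdocs/index.html"

# First run moves the file out of the docroot (readiness content disappears).
mv -v "$htdocs/index.html" "$tmp/" || true

# Second run fails (the source is gone), but `|| true` still yields exit 0,
# which is why the framework reports the exec as successful either way.
mv -v "$htdocs/index.html" "$tmp/" || true
echo "exit=$?"
```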

May 11 21:29:06.258: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:29:06.261: INFO: Waiting for statefulset status.readyReplicas to become 0, currently 3
May 11 21:29:16.267: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 11 21:29:16.267: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 11 21:29:16.267: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 11 21:29:16.305: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 11 21:29:16.305: INFO: ss-0  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:29:16.305: INFO: ss-1  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:16.305: INFO: ss-2  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:16.305: INFO: 
May 11 21:29:16.305: INFO: StatefulSet ss has not reached scale 0, at 3
May 11 21:29:18.024: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 11 21:29:18.024: INFO: ss-0  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:29:18.024: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:18.024: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:18.024: INFO: 
May 11 21:29:18.024: INFO: StatefulSet ss has not reached scale 0, at 3
May 11 21:29:19.138: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 11 21:29:19.138: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:29:19.138: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:19.138: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:19.138: INFO: 
May 11 21:29:19.138: INFO: StatefulSet ss has not reached scale 0, at 3
May 11 21:29:20.215: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 11 21:29:20.215: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:29:20.215: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:20.215: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:20.215: INFO: 
May 11 21:29:20.215: INFO: StatefulSet ss has not reached scale 0, at 3
May 11 21:29:21.445: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 11 21:29:21.446: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:29:21.446: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:21.446: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:21.446: INFO: 
May 11 21:29:21.446: INFO: StatefulSet ss has not reached scale 0, at 3
May 11 21:29:22.457: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 11 21:29:22.457: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:29:22.457: INFO: ss-2  kali-worker  Pending  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:45 +0000 UTC  }]
May 11 21:29:22.458: INFO: 
May 11 21:29:22.458: INFO: StatefulSet ss has not reached scale 0, at 2
May 11 21:29:23.725: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 11 21:29:23.725: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:29:23.725: INFO: 
May 11 21:29:23.725: INFO: StatefulSet ss has not reached scale 0, at 1
May 11 21:29:24.730: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 11 21:29:24.730: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:29:24.730: INFO: 
May 11 21:29:24.730: INFO: StatefulSet ss has not reached scale 0, at 1
May 11 21:29:25.733: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 11 21:29:25.733: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:28:13 +0000 UTC  }]
May 11 21:29:25.733: INFO: 
May 11 21:29:25.733: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-9973
May 11 21:29:26.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:29:26.866: INFO: rc: 1
May 11 21:29:26.866: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
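The framework's RunHostCmd helper re-runs the failed exec every 10 seconds until it succeeds or the budget runs out, which is what produces the repeated "Waiting 10s to retry" blocks below. A sketch of that retry pattern, with an illustrative helper name and a local stub standing in for the kubectl exec (neither is a framework API):

```shell
# Retry a command up to $attempts times, sleeping $interval seconds between
# tries. Names here (run_with_retry, flaky) are illustrative, not from the
# e2e framework.
run_with_retry() {
  local attempts=$1 interval=$2; shift 2
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then return 0; fi
    echo "attempt $i failed; retrying in ${interval}s" >&2
    sleep "$interval"
  done
  return 1
}

# Local stub standing in for the kubectl exec: fails twice, then succeeds.
count_file=$(mktemp)
echo 0 > "$count_file"
flaky() {
  n=$(( $(cat "$count_file") + 1 ))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

run_with_retry 5 0 flaky && echo ok
```

In the log the retries keep failing because the pod itself is being deleted, so the helper eventually gives up and the test proceeds to scale the set directly.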
May 11 21:29:36.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:29:36.952: INFO: rc: 1
May 11 21:29:36.952: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:29:46.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:29:47.040: INFO: rc: 1
May 11 21:29:47.040: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:29:57.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:29:57.149: INFO: rc: 1
May 11 21:29:57.149: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:30:07.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:30:07.238: INFO: rc: 1
May 11 21:30:07.238: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:30:17.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:30:17.339: INFO: rc: 1
May 11 21:30:17.339: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:30:27.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:30:27.426: INFO: rc: 1
May 11 21:30:27.426: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:30:37.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:30:37.511: INFO: rc: 1
May 11 21:30:37.511: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:30:47.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:30:47.598: INFO: rc: 1
May 11 21:30:47.598: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:30:57.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:30:57.696: INFO: rc: 1
May 11 21:30:57.696: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:31:07.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:31:07.784: INFO: rc: 1
May 11 21:31:07.784: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:31:17.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:31:17.886: INFO: rc: 1
May 11 21:31:17.886: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:31:27.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:31:27.982: INFO: rc: 1
May 11 21:31:27.982: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:31:37.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:31:38.218: INFO: rc: 1
May 11 21:31:38.218: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:31:48.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:31:48.320: INFO: rc: 1
May 11 21:31:48.320: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:31:58.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:31:58.414: INFO: rc: 1
May 11 21:31:58.414: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:32:08.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:32:08.516: INFO: rc: 1
May 11 21:32:08.516: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:32:18.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:32:18.607: INFO: rc: 1
May 11 21:32:18.607: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:32:28.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:32:28.711: INFO: rc: 1
May 11 21:32:28.711: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:32:38.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:32:39.432: INFO: rc: 1
May 11 21:32:39.432: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:32:49.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:32:49.671: INFO: rc: 1
May 11 21:32:49.671: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:32:59.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:32:59.751: INFO: rc: 1
May 11 21:32:59.752: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:33:09.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:33:09.838: INFO: rc: 1
May 11 21:33:09.838: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:33:19.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:33:19.926: INFO: rc: 1
May 11 21:33:19.926: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:33:29.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:33:30.043: INFO: rc: 1
May 11 21:33:30.043: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:33:40.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:33:40.140: INFO: rc: 1
May 11 21:33:40.140: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 11 21:33:50.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:33:50.450: INFO: rc: 1
May 11 21:33:50.450: INFO: Waiting 10s to retry failed RunHostCmd
(identical 10s retries at 21:34:00, 21:34:10 and 21:34:20 elided; each re-ran the same kubectl exec and failed with rc: 1, stderr: Error from server (NotFound): pods "ss-0" not found)
May 11 21:34:30.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:34:31.531: INFO: rc: 1
May 11 21:34:31.531: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
May 11 21:34:31.531: INFO: Scaling statefulset ss to 0
May 11 21:34:31.813: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 11 21:34:31.834: INFO: Deleting all statefulset in ns statefulset-9973
May 11 21:34:31.837: INFO: Scaling statefulset ss to 0
May 11 21:34:31.845: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:34:31.846: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:34:31.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9973" for this suite.

• [SLOW TEST:381.183 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":157,"skipped":2707,"failed":0}
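The burst-scaling StatefulSet torn down above can be sketched as a manifest. This is a hypothetical minimal equivalent of the suite's `ss` object, not the test's actual fixture; the image and labels are assumptions. Burst (parallel) scaling requires `podManagementPolicy: Parallel`; the default `OrderedReady` policy scales one pod at a time.

```yaml
# Hypothetical sketch of a burst-scalable StatefulSet like "ss" above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                # assumption: headless Service name
  podManagementPolicy: Parallel    # allows burst scale-up/scale-down
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4           # assumption; the log serves /usr/local/apache2/htdocs
```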
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:34:31.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May 11 21:34:33.766: INFO: >>> kubeConfig: /root/.kube/config
May 11 21:34:36.802: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:34:49.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8554" for this suite.

• [SLOW TEST:17.846 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":158,"skipped":2721,"failed":0}
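The CRDs whose OpenAPI publication is verified above look roughly like the following. This is a hypothetical sketch (group, names, and schema are assumptions): two CRDs sharing the same group and version but declaring different `kind`s are published under distinct definitions in the aggregated OpenAPI document.

```yaml
# Hypothetical CRD sketch; a second CRD would differ only in names/kind.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object      # structural schema required for publication
```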
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:34:49.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-2b0b580c-43be-4bf2-a708-7c7f2847f122
STEP: Creating a pod to test consume secrets
May 11 21:34:50.043: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c" in namespace "projected-4419" to be "Succeeded or Failed"
May 11 21:34:50.102: INFO: Pod "pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c": Phase="Pending", Reason="", readiness=false. Elapsed: 58.871852ms
May 11 21:34:52.369: INFO: Pod "pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325919535s
May 11 21:34:54.565: INFO: Pod "pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.522175465s
May 11 21:34:56.570: INFO: Pod "pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526652339s
May 11 21:34:58.574: INFO: Pod "pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.531319589s
STEP: Saw pod success
May 11 21:34:58.574: INFO: Pod "pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c" satisfied condition "Succeeded or Failed"
May 11 21:34:58.578: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c container projected-secret-volume-test: 
STEP: delete the pod
May 11 21:34:58.662: INFO: Waiting for pod pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c to disappear
May 11 21:34:58.670: INFO: Pod pod-projected-secrets-914857e9-4a03-4240-971d-f2890906c27c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:34:58.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4419" for this suite.

• [SLOW TEST:8.957 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2733,"failed":0}
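The "consume secrets" pod created above mounts a Secret through a projected volume. A minimal sketch, assuming the image, command, and mount path (only the Secret name pattern comes from the log):

```yaml
# Hypothetical pod sketch: read a Secret key via a projected volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29                       # assumption
    command: ["cat", "/etc/projected-secret-volume/data-1"]  # assumed key
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test         # Secret created by the test
```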
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:34:58.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May 11 21:34:58.747: INFO: namespace kubectl-7282
May 11 21:34:58.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7282'
May 11 21:34:59.143: INFO: stderr: ""
May 11 21:34:59.143: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 11 21:35:00.147: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:00.147: INFO: Found 0 / 1
May 11 21:35:01.259: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:01.259: INFO: Found 0 / 1
May 11 21:35:02.147: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:02.147: INFO: Found 0 / 1
May 11 21:35:03.236: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:03.236: INFO: Found 0 / 1
May 11 21:35:04.307: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:04.307: INFO: Found 1 / 1
May 11 21:35:04.307: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May 11 21:35:04.336: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:04.336: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 11 21:35:04.336: INFO: wait on agnhost-master startup in kubectl-7282 
May 11 21:35:04.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-p2z65 agnhost-master --namespace=kubectl-7282'
May 11 21:35:05.072: INFO: stderr: ""
May 11 21:35:05.072: INFO: stdout: "Paused\n"
STEP: exposing RC
May 11 21:35:05.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7282'
May 11 21:35:05.976: INFO: stderr: ""
May 11 21:35:05.976: INFO: stdout: "service/rm2 exposed\n"
May 11 21:35:06.588: INFO: Service rm2 in namespace kubectl-7282 found.
STEP: exposing service
May 11 21:35:08.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7282'
May 11 21:35:08.903: INFO: stderr: ""
May 11 21:35:08.903: INFO: stdout: "service/rm3 exposed\n"
May 11 21:35:08.965: INFO: Service rm3 in namespace kubectl-7282 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:35:10.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7282" for this suite.

• [SLOW TEST:12.297 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":160,"skipped":2746,"failed":0}
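The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` run above is roughly equivalent to creating this Service by hand (a sketch; `expose` derives the selector from the RC, assumed here to be `app: agnhost` as seen in the log):

```yaml
# Hypothetical Service equivalent of the `kubectl expose rc` call above.
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: agnhost        # copied from the RC's pod labels
  ports:
  - port: 1234          # service port
    targetPort: 6379    # container port
```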
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:35:10.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May 11 21:35:11.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-355'
May 11 21:35:11.530: INFO: stderr: ""
May 11 21:35:11.530: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 11 21:35:12.571: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:12.571: INFO: Found 0 / 1
May 11 21:35:13.535: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:13.535: INFO: Found 0 / 1
May 11 21:35:14.614: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:14.614: INFO: Found 0 / 1
May 11 21:35:15.536: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:15.536: INFO: Found 0 / 1
May 11 21:35:16.578: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:16.578: INFO: Found 0 / 1
May 11 21:35:17.533: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:17.533: INFO: Found 1 / 1
May 11 21:35:17.533: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
May 11 21:35:17.536: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:17.536: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 11 21:35:17.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-qm9jc --namespace=kubectl-355 -p {"metadata":{"annotations":{"x":"y"}}}'
May 11 21:35:17.862: INFO: stderr: ""
May 11 21:35:17.862: INFO: stdout: "pod/agnhost-master-qm9jc patched\n"
STEP: checking annotations
May 11 21:35:17.885: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 21:35:17.885: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:35:17.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-355" for this suite.

• [SLOW TEST:6.916 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":161,"skipped":2767,"failed":0}
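The strategic-merge patch applied above, `{"metadata":{"annotations":{"x":"y"}}}`, can equivalently be written in YAML, since `kubectl patch -p` accepts either JSON or YAML:

```yaml
# YAML form of the annotation patch used above.
metadata:
  annotations:
    x: "y"
```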
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:35:17.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:35:18.019: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-45469e86-e3a6-4cb0-b4b2-1cdf7fa59b53" in namespace "security-context-test-8036" to be "Succeeded or Failed"
May 11 21:35:18.023: INFO: Pod "busybox-privileged-false-45469e86-e3a6-4cb0-b4b2-1cdf7fa59b53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53823ms
May 11 21:35:20.379: INFO: Pod "busybox-privileged-false-45469e86-e3a6-4cb0-b4b2-1cdf7fa59b53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360513177s
May 11 21:35:22.383: INFO: Pod "busybox-privileged-false-45469e86-e3a6-4cb0-b4b2-1cdf7fa59b53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.364355408s
May 11 21:35:22.383: INFO: Pod "busybox-privileged-false-45469e86-e3a6-4cb0-b4b2-1cdf7fa59b53" satisfied condition "Succeeded or Failed"
May 11 21:35:22.390: INFO: Got logs for pod "busybox-privileged-false-45469e86-e3a6-4cb0-b4b2-1cdf7fa59b53": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:35:22.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8036" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2767,"failed":0}
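The unprivileged pod above can be sketched as follows (image and command are assumptions inferred from the `RTNETLINK answers: Operation not permitted` output): with `privileged: false`, netlink operations such as adding a link are denied inside the container.

```yaml
# Hypothetical sketch of the privileged=false pod above.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29                              # assumption
    command: ["ip", "link", "add", "dummy0", "type", "dummy"]  # assumed; fails unprivileged
    securityContext:
      privileged: false
```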

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:35:22.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:35:39.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8078" for this suite.

• [SLOW TEST:17.482 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":163,"skipped":2767,"failed":0}
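A ResourceQuota like the one exercised above is a minimal object (name and limit here are assumptions); once created, its `status.used` tracks Secret creation and deletion in the namespace, which is exactly what the STEPs above verify:

```yaml
# Hypothetical ResourceQuota sketch counting Secrets in a namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    secrets: "10"
```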
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:35:39.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1505
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-1505
I0511 21:35:41.338522       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1505, replica count: 2
I0511 21:35:44.388950       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:35:47.389101       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 11 21:35:47.389: INFO: Creating new exec pod
May 11 21:35:56.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1505 execpod65dzn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 11 21:35:56.869: INFO: stderr: "I0511 21:35:56.776652    2779 log.go:172] (0xc0003ee160) (0xc000331f40) Create stream\nI0511 21:35:56.776702    2779 log.go:172] (0xc0003ee160) (0xc000331f40) Stream added, broadcasting: 1\nI0511 21:35:56.782123    2779 log.go:172] (0xc0003ee160) Reply frame received for 1\nI0511 21:35:56.782155    2779 log.go:172] (0xc0003ee160) (0xc0009fc000) Create stream\nI0511 21:35:56.782164    2779 log.go:172] (0xc0003ee160) (0xc0009fc000) Stream added, broadcasting: 3\nI0511 21:35:56.783499    2779 log.go:172] (0xc0003ee160) Reply frame received for 3\nI0511 21:35:56.783528    2779 log.go:172] (0xc0003ee160) (0xc00062b220) Create stream\nI0511 21:35:56.783541    2779 log.go:172] (0xc0003ee160) (0xc00062b220) Stream added, broadcasting: 5\nI0511 21:35:56.784736    2779 log.go:172] (0xc0003ee160) Reply frame received for 5\nI0511 21:35:56.862044    2779 log.go:172] (0xc0003ee160) Data frame received for 5\nI0511 21:35:56.862070    2779 log.go:172] (0xc00062b220) (5) Data frame handling\nI0511 21:35:56.862086    2779 log.go:172] (0xc00062b220) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0511 21:35:56.862978    2779 log.go:172] (0xc0003ee160) Data frame received for 5\nI0511 21:35:56.862994    2779 log.go:172] (0xc00062b220) (5) Data frame handling\nI0511 21:35:56.863001    2779 log.go:172] (0xc00062b220) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0511 21:35:56.863271    2779 log.go:172] (0xc0003ee160) Data frame received for 3\nI0511 21:35:56.863291    2779 log.go:172] (0xc0009fc000) (3) Data frame handling\nI0511 21:35:56.863387    2779 log.go:172] (0xc0003ee160) Data frame received for 5\nI0511 21:35:56.863410    2779 log.go:172] (0xc00062b220) (5) Data frame handling\nI0511 21:35:56.865069    2779 log.go:172] (0xc0003ee160) Data frame received for 1\nI0511 21:35:56.865090    2779 log.go:172] (0xc000331f40) (1) Data frame handling\nI0511 21:35:56.865101    2779 log.go:172] 
(0xc000331f40) (1) Data frame sent\nI0511 21:35:56.865283    2779 log.go:172] (0xc0003ee160) (0xc000331f40) Stream removed, broadcasting: 1\nI0511 21:35:56.865314    2779 log.go:172] (0xc0003ee160) Go away received\nI0511 21:35:56.865739    2779 log.go:172] (0xc0003ee160) (0xc000331f40) Stream removed, broadcasting: 1\nI0511 21:35:56.865760    2779 log.go:172] (0xc0003ee160) (0xc0009fc000) Stream removed, broadcasting: 3\nI0511 21:35:56.865769    2779 log.go:172] (0xc0003ee160) (0xc00062b220) Stream removed, broadcasting: 5\n"
May 11 21:35:56.869: INFO: stdout: ""
May 11 21:35:56.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1505 execpod65dzn -- /bin/sh -x -c nc -zv -t -w 2 10.101.176.245 80'
May 11 21:35:57.050: INFO: stderr: "I0511 21:35:56.984839    2800 log.go:172] (0xc00058e000) (0xc00060b540) Create stream\nI0511 21:35:56.984876    2800 log.go:172] (0xc00058e000) (0xc00060b540) Stream added, broadcasting: 1\nI0511 21:35:56.986690    2800 log.go:172] (0xc00058e000) Reply frame received for 1\nI0511 21:35:56.986715    2800 log.go:172] (0xc00058e000) (0xc000530000) Create stream\nI0511 21:35:56.986723    2800 log.go:172] (0xc00058e000) (0xc000530000) Stream added, broadcasting: 3\nI0511 21:35:56.987491    2800 log.go:172] (0xc00058e000) Reply frame received for 3\nI0511 21:35:56.987535    2800 log.go:172] (0xc00058e000) (0xc0008e4000) Create stream\nI0511 21:35:56.987550    2800 log.go:172] (0xc00058e000) (0xc0008e4000) Stream added, broadcasting: 5\nI0511 21:35:56.988292    2800 log.go:172] (0xc00058e000) Reply frame received for 5\nI0511 21:35:57.044623    2800 log.go:172] (0xc00058e000) Data frame received for 3\nI0511 21:35:57.044815    2800 log.go:172] (0xc000530000) (3) Data frame handling\nI0511 21:35:57.044908    2800 log.go:172] (0xc00058e000) Data frame received for 5\nI0511 21:35:57.044937    2800 log.go:172] (0xc0008e4000) (5) Data frame handling\nI0511 21:35:57.044969    2800 log.go:172] (0xc0008e4000) (5) Data frame sent\nI0511 21:35:57.044993    2800 log.go:172] (0xc00058e000) Data frame received for 5\nI0511 21:35:57.045010    2800 log.go:172] (0xc0008e4000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.176.245 80\nConnection to 10.101.176.245 80 port [tcp/http] succeeded!\nI0511 21:35:57.046147    2800 log.go:172] (0xc00058e000) Data frame received for 1\nI0511 21:35:57.046174    2800 log.go:172] (0xc00060b540) (1) Data frame handling\nI0511 21:35:57.046192    2800 log.go:172] (0xc00060b540) (1) Data frame sent\nI0511 21:35:57.046212    2800 log.go:172] (0xc00058e000) (0xc00060b540) Stream removed, broadcasting: 1\nI0511 21:35:57.046634    2800 log.go:172] (0xc00058e000) (0xc00060b540) Stream removed, broadcasting: 1\nI0511 
21:35:57.046653    2800 log.go:172] (0xc00058e000) (0xc000530000) Stream removed, broadcasting: 3\nI0511 21:35:57.046811    2800 log.go:172] (0xc00058e000) (0xc0008e4000) Stream removed, broadcasting: 5\n"
May 11 21:35:57.051: INFO: stdout: ""
May 11 21:35:57.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1505 execpod65dzn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30140'
May 11 21:35:57.489: INFO: stderr: "I0511 21:35:57.370263    2821 log.go:172] (0xc00070abb0) (0xc00077c460) Create stream\nI0511 21:35:57.370306    2821 log.go:172] (0xc00070abb0) (0xc00077c460) Stream added, broadcasting: 1\nI0511 21:35:57.372293    2821 log.go:172] (0xc00070abb0) Reply frame received for 1\nI0511 21:35:57.372334    2821 log.go:172] (0xc00070abb0) (0xc000539720) Create stream\nI0511 21:35:57.372360    2821 log.go:172] (0xc00070abb0) (0xc000539720) Stream added, broadcasting: 3\nI0511 21:35:57.373069    2821 log.go:172] (0xc00070abb0) Reply frame received for 3\nI0511 21:35:57.373106    2821 log.go:172] (0xc00070abb0) (0xc00064e320) Create stream\nI0511 21:35:57.373228    2821 log.go:172] (0xc00070abb0) (0xc00064e320) Stream added, broadcasting: 5\nI0511 21:35:57.373998    2821 log.go:172] (0xc00070abb0) Reply frame received for 5\nI0511 21:35:57.483204    2821 log.go:172] (0xc00070abb0) Data frame received for 3\nI0511 21:35:57.483230    2821 log.go:172] (0xc000539720) (3) Data frame handling\nI0511 21:35:57.483257    2821 log.go:172] (0xc00070abb0) Data frame received for 5\nI0511 21:35:57.483278    2821 log.go:172] (0xc00064e320) (5) Data frame handling\nI0511 21:35:57.483289    2821 log.go:172] (0xc00064e320) (5) Data frame sent\nI0511 21:35:57.483295    2821 log.go:172] (0xc00070abb0) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.15 30140\nConnection to 172.17.0.15 30140 port [tcp/30140] succeeded!\nI0511 21:35:57.483300    2821 log.go:172] (0xc00064e320) (5) Data frame handling\nI0511 21:35:57.484340    2821 log.go:172] (0xc00070abb0) Data frame received for 1\nI0511 21:35:57.484362    2821 log.go:172] (0xc00077c460) (1) Data frame handling\nI0511 21:35:57.484372    2821 log.go:172] (0xc00077c460) (1) Data frame sent\nI0511 21:35:57.484385    2821 log.go:172] (0xc00070abb0) (0xc00077c460) Stream removed, broadcasting: 1\nI0511 21:35:57.484397    2821 log.go:172] (0xc00070abb0) Go away received\nI0511 21:35:57.484865    2821 log.go:172] 
(0xc00070abb0) (0xc00077c460) Stream removed, broadcasting: 1\nI0511 21:35:57.484889    2821 log.go:172] (0xc00070abb0) (0xc000539720) Stream removed, broadcasting: 3\nI0511 21:35:57.484901    2821 log.go:172] (0xc00070abb0) (0xc00064e320) Stream removed, broadcasting: 5\n"
May 11 21:35:57.489: INFO: stdout: ""
May 11 21:35:57.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1505 execpod65dzn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30140'
May 11 21:35:57.953: INFO: stderr: "I0511 21:35:57.882037    2842 log.go:172] (0xc0009f8000) (0xc0003d2c80) Create stream\nI0511 21:35:57.882087    2842 log.go:172] (0xc0009f8000) (0xc0003d2c80) Stream added, broadcasting: 1\nI0511 21:35:57.884317    2842 log.go:172] (0xc0009f8000) Reply frame received for 1\nI0511 21:35:57.884357    2842 log.go:172] (0xc0009f8000) (0xc000a34000) Create stream\nI0511 21:35:57.884370    2842 log.go:172] (0xc0009f8000) (0xc000a34000) Stream added, broadcasting: 3\nI0511 21:35:57.884943    2842 log.go:172] (0xc0009f8000) Reply frame received for 3\nI0511 21:35:57.884968    2842 log.go:172] (0xc0009f8000) (0xc0009b4000) Create stream\nI0511 21:35:57.884975    2842 log.go:172] (0xc0009f8000) (0xc0009b4000) Stream added, broadcasting: 5\nI0511 21:35:57.885794    2842 log.go:172] (0xc0009f8000) Reply frame received for 5\nI0511 21:35:57.948733    2842 log.go:172] (0xc0009f8000) Data frame received for 3\nI0511 21:35:57.948759    2842 log.go:172] (0xc000a34000) (3) Data frame handling\nI0511 21:35:57.948784    2842 log.go:172] (0xc0009f8000) Data frame received for 5\nI0511 21:35:57.948805    2842 log.go:172] (0xc0009b4000) (5) Data frame handling\nI0511 21:35:57.948819    2842 log.go:172] (0xc0009b4000) (5) Data frame sent\nI0511 21:35:57.948826    2842 log.go:172] (0xc0009f8000) Data frame received for 5\nI0511 21:35:57.948835    2842 log.go:172] (0xc0009b4000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 30140\nConnection to 172.17.0.18 30140 port [tcp/30140] succeeded!\nI0511 21:35:57.949703    2842 log.go:172] (0xc0009f8000) Data frame received for 1\nI0511 21:35:57.949720    2842 log.go:172] (0xc0003d2c80) (1) Data frame handling\nI0511 21:35:57.949736    2842 log.go:172] (0xc0003d2c80) (1) Data frame sent\nI0511 21:35:57.949747    2842 log.go:172] (0xc0009f8000) (0xc0003d2c80) Stream removed, broadcasting: 1\nI0511 21:35:57.949989    2842 log.go:172] (0xc0009f8000) (0xc0003d2c80) Stream removed, broadcasting: 1\nI0511 21:35:57.950011    2842 log.go:172] (0xc0009f8000) (0xc000a34000) Stream removed, broadcasting: 3\nI0511 21:35:57.950026    2842 log.go:172] (0xc0009f8000) Go away received\nI0511 21:35:57.950071    2842 log.go:172] (0xc0009f8000) (0xc0009b4000) Stream removed, broadcasting: 5\n"
May 11 21:35:57.953: INFO: stdout: ""
May 11 21:35:57.953: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:35:58.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1505" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:18.209 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":164,"skipped":2819,"failed":0}
SSSSSSSSSSSSSSSSSS
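The `nc -zv -t -w 2 172.17.0.18 30140` check in the test above succeeds as soon as a TCP handshake to the node IP and NodePort completes. A minimal self-contained sketch of that probe (plain Python sockets, not the e2e framework's code; the local listener merely stands in for a NodePort endpoint):

```python
# Sketch of the TCP reachability probe the e2e test runs via `nc -zv -t -w 2`.
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True iff a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener standing in for the NodePort endpoint.
server = socket.socket()
server.bind(("127.0.0.1", 0))      # pick any free port
server.listen(1)
host, port = server.getsockname()

print(port_reachable(host, port))          # True: listener is accepting
server.close()
print(port_reachable("127.0.0.1", 1, 0.5)) # almost surely False: port 1 closed
```

The e2e test treats a non-zero exit from `nc` (here, a `False` return) as the service being unreachable.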
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:35:58.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:36:04.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6488" for this suite.

• [SLOW TEST:6.207 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2837,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
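The hostAliases test above verifies that entries declared on the pod spec are rendered by the kubelet into the container's `/etc/hosts`. A minimal illustrative manifest (names and addresses are invented, not taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo        # illustrative name
spec:
  restartPolicy: Never
  hostAliases:                  # each entry becomes an /etc/hosts line
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]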
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:36:04.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 11 21:36:11.851: INFO: 9 pods remaining
May 11 21:36:11.851: INFO: 0 pods has nil DeletionTimestamp
May 11 21:36:11.851: INFO: 
May 11 21:36:13.468: INFO: 0 pods remaining
May 11 21:36:13.468: INFO: 0 pods has nil DeletionTimestamp
May 11 21:36:13.468: INFO: 
May 11 21:36:14.830: INFO: 0 pods remaining
May 11 21:36:14.830: INFO: 0 pods has nil DeletionTimestamp
May 11 21:36:14.830: INFO: 
STEP: Gathering metrics
W0511 21:36:15.584211       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 21:36:15.584: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:36:15.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2950" for this suite.

• [SLOW TEST:11.327 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":166,"skipped":2884,"failed":0}
SSSSSSSSSSSS
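The behavior verified above is foreground cascading deletion: with `propagationPolicy: Foreground`, the replication controller is only marked with a `deletionTimestamp` and remains visible until the garbage collector has removed every dependent pod, which matches the "9 pods remaining" countdown in the log. The DeleteOptions involved is a fragment like:

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Foreground"
}
```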
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:36:15.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3805.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3805.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3805.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3805.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3805.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3805.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 21:36:32.264: INFO: DNS probes using dns-3805/dns-test-8c99135d-cef9-4de9-96b3-f5e7aedd3450 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:36:32.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3805" for this suite.

• [SLOW TEST:17.276 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":167,"skipped":2896,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:36:32.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:36:48.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1084" for this suite.

• [SLOW TEST:16.186 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":168,"skipped":2896,"failed":0}
SSSSSSSSSSSSSSSSSS
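The quota lifecycle above (status calculated, pod usage captured, over-quota pods rejected, usage released on delete) is driven by a ResourceQuota object in the namespace. An illustrative manifest with invented limits (the test's actual values are not shown in the log):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-demo              # illustrative name
spec:
  hard:
    pods: "1"
    requests.cpu: "500m"
    requests.memory: 256Mi
    limits.cpu: "1"
    limits.memory: 512Mi
```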
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:36:49.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:37:01.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6048" for this suite.

• [SLOW TEST:12.116 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":169,"skipped":2914,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:37:01.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-f56b3f4f-f668-44e5-a376-54b589248b1f in namespace container-probe-4503
May 11 21:37:07.666: INFO: Started pod busybox-f56b3f4f-f668-44e5-a376-54b589248b1f in namespace container-probe-4503
STEP: checking the pod's current state and verifying that restartCount is present
May 11 21:37:07.668: INFO: Initial restart count of pod busybox-f56b3f4f-f668-44e5-a376-54b589248b1f is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:41:08.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4503" for this suite.

• [SLOW TEST:247.803 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2930,"failed":0}
SSSSSSSSSSS
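The probe test above creates a busybox pod whose exec liveness probe runs `cat /tmp/health`; as long as the file exists the probe passes and `restartCount` stays 0 for the roughly four minutes the test observes. A sketch of such a pod, with illustrative timing values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo           # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # non-zero exit => restart
      initialDelaySeconds: 5
      periodSeconds: 5
```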
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:41:09.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
May 11 21:41:15.903: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3557 pod-service-account-311ba663-66ff-4144-abb4-bb7d300cf281 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 11 21:41:27.038: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3557 pod-service-account-311ba663-66ff-4144-abb4-bb7d300cf281 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 11 21:41:27.238: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3557 pod-service-account-311ba663-66ff-4144-abb4-bb7d300cf281 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:41:27.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3557" for this suite.

• [SLOW TEST:18.464 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":171,"skipped":2941,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:41:27.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
May 11 21:41:27.774: INFO: Waiting up to 5m0s for pod "client-containers-27254737-b462-4602-a50c-08361c15c09b" in namespace "containers-9099" to be "Succeeded or Failed"
May 11 21:41:27.897: INFO: Pod "client-containers-27254737-b462-4602-a50c-08361c15c09b": Phase="Pending", Reason="", readiness=false. Elapsed: 123.215846ms
May 11 21:41:29.901: INFO: Pod "client-containers-27254737-b462-4602-a50c-08361c15c09b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127074951s
May 11 21:41:31.915: INFO: Pod "client-containers-27254737-b462-4602-a50c-08361c15c09b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141100931s
May 11 21:41:34.070: INFO: Pod "client-containers-27254737-b462-4602-a50c-08361c15c09b": Phase="Running", Reason="", readiness=true. Elapsed: 6.296024328s
May 11 21:41:36.073: INFO: Pod "client-containers-27254737-b462-4602-a50c-08361c15c09b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.29930582s
STEP: Saw pod success
May 11 21:41:36.073: INFO: Pod "client-containers-27254737-b462-4602-a50c-08361c15c09b" satisfied condition "Succeeded or Failed"
May 11 21:41:36.075: INFO: Trying to get logs from node kali-worker pod client-containers-27254737-b462-4602-a50c-08361c15c09b container test-container: 
STEP: delete the pod
May 11 21:41:36.493: INFO: Waiting for pod client-containers-27254737-b462-4602-a50c-08361c15c09b to disappear
May 11 21:41:36.681: INFO: Pod client-containers-27254737-b462-4602-a50c-08361c15c09b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:41:36.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9099" for this suite.

• [SLOW TEST:9.243 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2956,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
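The "override all" case above sets both `command` and `args` on the container, which replace the image's ENTRYPOINT and CMD respectively. An illustrative manifest (image and values are placeholders, not the test's exact spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]           # overrides the image ENTRYPOINT
    args: ["override", "arguments"]  # overrides the image CMD
```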
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:41:36.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-2793
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-2793
STEP: creating replication controller externalsvc in namespace services-2793
I0511 21:41:37.575749       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2793, replica count: 2
I0511 21:41:40.626156       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:41:43.626412       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:41:46.626697       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
May 11 21:41:46.760: INFO: Creating new exec pod
May 11 21:41:54.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2793 execpodqssmz -- /bin/sh -x -c nslookup nodeport-service'
May 11 21:41:55.253: INFO: stderr: "I0511 21:41:55.176002    2948 log.go:172] (0xc00003a790) (0xc0007dd540) Create stream\nI0511 21:41:55.176087    2948 log.go:172] (0xc00003a790) (0xc0007dd540) Stream added, broadcasting: 1\nI0511 21:41:55.179212    2948 log.go:172] (0xc00003a790) Reply frame received for 1\nI0511 21:41:55.179250    2948 log.go:172] (0xc00003a790) (0xc000a32000) Create stream\nI0511 21:41:55.179264    2948 log.go:172] (0xc00003a790) (0xc000a32000) Stream added, broadcasting: 3\nI0511 21:41:55.180082    2948 log.go:172] (0xc00003a790) Reply frame received for 3\nI0511 21:41:55.180110    2948 log.go:172] (0xc00003a790) (0xc0007dd5e0) Create stream\nI0511 21:41:55.180121    2948 log.go:172] (0xc00003a790) (0xc0007dd5e0) Stream added, broadcasting: 5\nI0511 21:41:55.180879    2948 log.go:172] (0xc00003a790) Reply frame received for 5\nI0511 21:41:55.237596    2948 log.go:172] (0xc00003a790) Data frame received for 5\nI0511 21:41:55.237684    2948 log.go:172] (0xc0007dd5e0) (5) Data frame handling\nI0511 21:41:55.237726    2948 log.go:172] (0xc0007dd5e0) (5) Data frame sent\n+ nslookup nodeport-service\nI0511 21:41:55.243864    2948 log.go:172] (0xc00003a790) Data frame received for 3\nI0511 21:41:55.243879    2948 log.go:172] (0xc000a32000) (3) Data frame handling\nI0511 21:41:55.243890    2948 log.go:172] (0xc000a32000) (3) Data frame sent\nI0511 21:41:55.245580    2948 log.go:172] (0xc00003a790) Data frame received for 5\nI0511 21:41:55.245594    2948 log.go:172] (0xc0007dd5e0) (5) Data frame handling\nI0511 21:41:55.245685    2948 log.go:172] (0xc00003a790) Data frame received for 3\nI0511 21:41:55.245709    2948 log.go:172] (0xc000a32000) (3) Data frame handling\nI0511 21:41:55.245720    2948 log.go:172] (0xc000a32000) (3) Data frame sent\nI0511 21:41:55.245731    2948 log.go:172] (0xc00003a790) Data frame received for 3\nI0511 21:41:55.245741    2948 log.go:172] (0xc000a32000) (3) Data frame handling\nI0511 21:41:55.247027    2948 log.go:172] (0xc00003a790) Data frame received for 1\nI0511 21:41:55.247045    2948 log.go:172] (0xc0007dd540) (1) Data frame handling\nI0511 21:41:55.247068    2948 log.go:172] (0xc0007dd540) (1) Data frame sent\nI0511 21:41:55.247090    2948 log.go:172] (0xc00003a790) (0xc0007dd540) Stream removed, broadcasting: 1\nI0511 21:41:55.247110    2948 log.go:172] (0xc00003a790) Go away received\nI0511 21:41:55.247410    2948 log.go:172] (0xc00003a790) (0xc0007dd540) Stream removed, broadcasting: 1\nI0511 21:41:55.247426    2948 log.go:172] (0xc00003a790) (0xc000a32000) Stream removed, broadcasting: 3\nI0511 21:41:55.247436    2948 log.go:172] (0xc00003a790) (0xc0007dd5e0) Stream removed, broadcasting: 5\n"
May 11 21:41:55.253: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2793.svc.cluster.local\tcanonical name = externalsvc.services-2793.svc.cluster.local.\nName:\texternalsvc.services-2793.svc.cluster.local\nAddress: 10.100.151.121\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-2793, will wait for the garbage collector to delete the pods
May 11 21:41:55.312: INFO: Deleting ReplicationController externalsvc took: 6.324062ms
May 11 21:41:55.712: INFO: Terminating ReplicationController externalsvc pods took: 400.299387ms
May 11 21:42:04.407: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:42:04.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2793" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:28.938 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":173,"skipped":3012,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
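The type change above ends with `nslookup nodeport-service` returning a CNAME for `externalsvc.services-2793.svc.cluster.local`, which is exactly what an ExternalName service produces. The resulting service looks roughly like this (reconstructed from the log; when converting an existing NodePort service, its allocated `clusterIP` and node ports must also be cleared):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-2793
spec:
  type: ExternalName
  externalName: externalsvc.services-2793.svc.cluster.local
```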
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:42:05.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:42:06.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85ccfe0d-c37c-4d64-be33-d49bf1431346" in namespace "projected-4662" to be "Succeeded or Failed"
May 11 21:42:06.650: INFO: Pod "downwardapi-volume-85ccfe0d-c37c-4d64-be33-d49bf1431346": Phase="Pending", Reason="", readiness=false. Elapsed: 79.410645ms
May 11 21:42:08.704: INFO: Pod "downwardapi-volume-85ccfe0d-c37c-4d64-be33-d49bf1431346": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13364305s
May 11 21:42:10.715: INFO: Pod "downwardapi-volume-85ccfe0d-c37c-4d64-be33-d49bf1431346": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144313525s
May 11 21:42:12.855: INFO: Pod "downwardapi-volume-85ccfe0d-c37c-4d64-be33-d49bf1431346": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.28509524s
STEP: Saw pod success
May 11 21:42:12.855: INFO: Pod "downwardapi-volume-85ccfe0d-c37c-4d64-be33-d49bf1431346" satisfied condition "Succeeded or Failed"
May 11 21:42:12.858: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-85ccfe0d-c37c-4d64-be33-d49bf1431346 container client-container: 
STEP: delete the pod
May 11 21:42:13.127: INFO: Waiting for pod downwardapi-volume-85ccfe0d-c37c-4d64-be33-d49bf1431346 to disappear
May 11 21:42:13.132: INFO: Pod downwardapi-volume-85ccfe0d-c37c-4d64-be33-d49bf1431346 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:42:13.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4662" for this suite.

• [SLOW TEST:7.467 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3040,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:42:13.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-8b60f13c-0e30-40a0-b91d-b8e697fabde6
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:42:13.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8959" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":175,"skipped":3064,"failed":0}
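The test above expects the apiserver to reject a ConfigMap whose data map contains an empty key. A sketch of the validation rule being exercised (an approximation: keys must be non-empty, at most 253 characters, and limited to alphanumerics plus `-`, `_`, `.`; this is not the actual Kubernetes source):

```python
import re

# Approximation of ConfigMap key validation, labeled as an assumption:
# non-empty, <= 253 chars, characters from [-._a-zA-Z0-9] only.
_KEY_RE = re.compile(r'^[-._a-zA-Z0-9]+$')

def is_valid_configmap_key(key: str) -> bool:
    return len(key) <= 253 and bool(_KEY_RE.match(key))

ok = is_valid_configmap_key("game.properties")   # accepted
empty = is_valid_configmap_key("")               # rejected, as in the test above
spaced = is_valid_configmap_key("bad key")       # rejected: space not allowed
```

Because `^...+$` requires at least one character, the empty key fails the match, which is exactly the create error the test asserts on.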
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:42:13.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 11 21:42:14.706: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7055 /api/v1/namespaces/watch-7055/configmaps/e2e-watch-test-resource-version 1a0dc67b-14f0-48f9-af3c-3adc6ad3d139 3526124 0 2020-05-11 21:42:14 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-11 21:42:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 21:42:14.707: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7055 /api/v1/namespaces/watch-7055/configmaps/e2e-watch-test-resource-version 1a0dc67b-14f0-48f9-af3c-3adc6ad3d139 3526125 0 2020-05-11 21:42:14 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-11 21:42:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:42:14.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7055" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":176,"skipped":3079,"failed":0}
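The `FieldsV1{Raw:*[123 34 ...]}` in the watch events above is a JSON document printed as a decimal byte slice: it is the managedFields ownership map recorded for the `e2e.test` manager. Decoding the bytes (copied verbatim from the MODIFIED event) recovers it:

```python
import json

# Byte values copied verbatim from the Raw:*[...] slice in the MODIFIED event above.
raw = [123, 34, 102, 58, 100, 97, 116, 97, 34, 58, 123, 34, 46, 34, 58,
       123, 125, 44, 34, 102, 58, 109, 117, 116, 97, 116, 105, 111, 110,
       34, 58, 123, 125, 125, 44, 34, 102, 58, 109, 101, 116, 97, 100,
       97, 116, 97, 34, 58, 123, 34, 102, 58, 108, 97, 98, 101, 108, 115,
       34, 58, 123, 34, 46, 34, 58, 123, 125, 44, 34, 102, 58, 119, 97,
       116, 99, 104, 45, 116, 104, 105, 115, 45, 99, 111, 110, 102, 105,
       103, 109, 97, 112, 34, 58, 123, 125, 125, 125, 125]
decoded = bytes(raw).decode("utf-8")
fields = json.loads(decoded)
```

The decoded document shows the manager owns `data.mutation` and the `watch-this-configmap` label, matching the `Data:map[string]string{mutation: 2,}` and label shown in the same log line.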
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:42:14.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5206
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5206
STEP: creating replication controller externalsvc in namespace services-5206
I0511 21:42:15.970458       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5206, replica count: 2
I0511 21:42:19.020859       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:42:22.021061       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:42:25.021425       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
May 11 21:42:25.877: INFO: Creating new exec pod
May 11 21:42:32.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5206 execpodpjrcr -- /bin/sh -x -c nslookup clusterip-service'
May 11 21:42:32.321: INFO: stderr: "I0511 21:42:32.216673    2968 log.go:172] (0xc00095f290) (0xc0009f88c0) Create stream\nI0511 21:42:32.216730    2968 log.go:172] (0xc00095f290) (0xc0009f88c0) Stream added, broadcasting: 1\nI0511 21:42:32.222339    2968 log.go:172] (0xc00095f290) Reply frame received for 1\nI0511 21:42:32.222371    2968 log.go:172] (0xc00095f290) (0xc00067f720) Create stream\nI0511 21:42:32.222378    2968 log.go:172] (0xc00095f290) (0xc00067f720) Stream added, broadcasting: 3\nI0511 21:42:32.223215    2968 log.go:172] (0xc00095f290) Reply frame received for 3\nI0511 21:42:32.223241    2968 log.go:172] (0xc00095f290) (0xc000512b40) Create stream\nI0511 21:42:32.223256    2968 log.go:172] (0xc00095f290) (0xc000512b40) Stream added, broadcasting: 5\nI0511 21:42:32.224042    2968 log.go:172] (0xc00095f290) Reply frame received for 5\nI0511 21:42:32.278721    2968 log.go:172] (0xc00095f290) Data frame received for 5\nI0511 21:42:32.278755    2968 log.go:172] (0xc000512b40) (5) Data frame handling\nI0511 21:42:32.278775    2968 log.go:172] (0xc000512b40) (5) Data frame sent\n+ nslookup clusterip-service\nI0511 21:42:32.312947    2968 log.go:172] (0xc00095f290) Data frame received for 3\nI0511 21:42:32.312985    2968 log.go:172] (0xc00067f720) (3) Data frame handling\nI0511 21:42:32.313011    2968 log.go:172] (0xc00067f720) (3) Data frame sent\nI0511 21:42:32.313980    2968 log.go:172] (0xc00095f290) Data frame received for 3\nI0511 21:42:32.314000    2968 log.go:172] (0xc00067f720) (3) Data frame handling\nI0511 21:42:32.314011    2968 log.go:172] (0xc00067f720) (3) Data frame sent\nI0511 21:42:32.314703    2968 log.go:172] (0xc00095f290) Data frame received for 5\nI0511 21:42:32.314715    2968 log.go:172] (0xc000512b40) (5) Data frame handling\nI0511 21:42:32.314755    2968 log.go:172] (0xc00095f290) Data frame received for 3\nI0511 21:42:32.314783    2968 log.go:172] (0xc00067f720) (3) Data frame handling\nI0511 21:42:32.316457    2968 log.go:172] (0xc00095f290) Data frame received for 1\nI0511 21:42:32.316492    2968 log.go:172] (0xc0009f88c0) (1) Data frame handling\nI0511 21:42:32.316510    2968 log.go:172] (0xc0009f88c0) (1) Data frame sent\nI0511 21:42:32.316553    2968 log.go:172] (0xc00095f290) (0xc0009f88c0) Stream removed, broadcasting: 1\nI0511 21:42:32.316603    2968 log.go:172] (0xc00095f290) Go away received\nI0511 21:42:32.316974    2968 log.go:172] (0xc00095f290) (0xc0009f88c0) Stream removed, broadcasting: 1\nI0511 21:42:32.317008    2968 log.go:172] (0xc00095f290) (0xc00067f720) Stream removed, broadcasting: 3\nI0511 21:42:32.317021    2968 log.go:172] (0xc00095f290) (0xc000512b40) Stream removed, broadcasting: 5\n"
May 11 21:42:32.321: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5206.svc.cluster.local\tcanonical name = externalsvc.services-5206.svc.cluster.local.\nName:\texternalsvc.services-5206.svc.cluster.local\nAddress: 10.97.65.26\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5206, will wait for the garbage collector to delete the pods
May 11 21:42:32.660: INFO: Deleting ReplicationController externalsvc took: 265.292169ms
May 11 21:42:33.060: INFO: Terminating ReplicationController externalsvc pods took: 400.21308ms
May 11 21:42:44.141: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:42:44.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5206" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:29.268 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":177,"skipped":3113,"failed":0}
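The type change above is a single update to the Service spec: the ClusterIP service becomes a DNS CNAME pointing at `externalsvc`, which is what the `nslookup` output (`canonical name = externalsvc.services-5206.svc.cluster.local.`) confirms. A sketch of the spec the updated service ends up with (field names follow the core/v1 Service schema; the exact update call lives in the e2e framework and is not shown here):

```python
# The CNAME target observed in the nslookup output above.
external_name = "externalsvc.services-5206.svc.cluster.local"

service_after = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "clusterip-service", "namespace": "services-5206"},
    "spec": {
        # type switched from ClusterIP; the clusterIP field must be cleared
        # when converting to ExternalName, since no virtual IP is allocated.
        "type": "ExternalName",
        "externalName": external_name,
    },
}
```

After the change, cluster DNS answers queries for `clusterip-service` with a CNAME record instead of an A record for a virtual IP, which is why the test verifies reachability via `nslookup` from an exec pod.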
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:42:44.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-9569/secret-test-5445e7f0-0683-4bbf-9670-b962a5f9e9d8
STEP: Creating a pod to test consume secrets
May 11 21:42:44.315: INFO: Waiting up to 5m0s for pod "pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e" in namespace "secrets-9569" to be "Succeeded or Failed"
May 11 21:42:44.338: INFO: Pod "pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.413213ms
May 11 21:42:46.443: INFO: Pod "pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127374934s
May 11 21:42:48.541: INFO: Pod "pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225462263s
May 11 21:42:50.545: INFO: Pod "pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.229481627s
May 11 21:42:52.556: INFO: Pod "pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.24003897s
STEP: Saw pod success
May 11 21:42:52.556: INFO: Pod "pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e" satisfied condition "Succeeded or Failed"
May 11 21:42:52.558: INFO: Trying to get logs from node kali-worker pod pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e container env-test: 
STEP: delete the pod
May 11 21:42:52.881: INFO: Waiting for pod pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e to disappear
May 11 21:42:53.047: INFO: Pod pod-configmaps-af1c7134-a6f7-4bad-a448-cb2e73890d4e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:42:53.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9569" for this suite.

• [SLOW TEST:8.807 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3116,"failed":0}
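The "consumable via the environment" test injects a secret key into the `env-test` container as an environment variable via `secretKeyRef`. A minimal pod manifest of that shape (the key name `data-1` and image are hypothetical placeholders; the real test generates its own spec):

```python
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "env-test-pod"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "env-test",
            "image": "busybox",
            "command": ["sh", "-c", "env"],
            "env": [{
                "name": "SECRET_DATA",
                "valueFrom": {"secretKeyRef": {
                    # Secret created by the test above; key name is illustrative.
                    "name": "secret-test-5445e7f0-0683-4bbf-9670-b962a5f9e9d8",
                    "key": "data-1",
                }},
            }],
        }],
    },
}
env_ref = pod["spec"]["containers"][0]["env"][0]
```

The container prints its environment and exits, so the pod reaches `Succeeded` and the framework then reads the container log to assert the variable's value.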
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:42:53.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:42:53.969: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d" in namespace "projected-9124" to be "Succeeded or Failed"
May 11 21:42:53.971: INFO: Pod "downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.908483ms
May 11 21:42:56.095: INFO: Pod "downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125256298s
May 11 21:42:58.485: INFO: Pod "downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.515503335s
May 11 21:43:00.503: INFO: Pod "downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.53316979s
May 11 21:43:02.592: INFO: Pod "downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622634663s
May 11 21:43:04.596: INFO: Pod "downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.626460562s
STEP: Saw pod success
May 11 21:43:04.596: INFO: Pod "downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d" satisfied condition "Succeeded or Failed"
May 11 21:43:04.599: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d container client-container: 
STEP: delete the pod
May 11 21:43:04.683: INFO: Waiting for pod downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d to disappear
May 11 21:43:04.692: INFO: Pod downwardapi-volume-d547fa51-3ad5-48f4-aa41-e9e2d989f97d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:43:04.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9124" for this suite.

• [SLOW TEST:11.644 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3129,"failed":0}
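Setting a `mode` on an individual downward API item overrides the volume's `defaultMode` for that one file, which is the behavior "should set mode on item file" checks. A sketch of the volume source shape involved (the specific mode values and paths are illustrative assumptions):

```python
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "defaultMode": 0o644,      # applies to items without an explicit mode
        "items": [{
            "path": "podname",
            "fieldRef": {"fieldPath": "metadata.name"},
            "mode": 0o400,         # per-item mode overrides defaultMode
        }],
    },
}

item = volume["downwardAPI"]["items"][0]
# The kubelet applies the item's own mode when present, else defaultMode.
effective_mode = item.get("mode", volume["downwardAPI"]["defaultMode"])
```

The test then reads the file's permission bits from inside the container and asserts they match the requested per-item mode.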
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:43:04.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-7mrm
STEP: Creating a pod to test atomic-volume-subpath
May 11 21:43:05.366: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7mrm" in namespace "subpath-1373" to be "Succeeded or Failed"
May 11 21:43:05.401: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Pending", Reason="", readiness=false. Elapsed: 35.243687ms
May 11 21:43:07.405: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038992629s
May 11 21:43:09.628: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 4.261821887s
May 11 21:43:11.675: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 6.308921707s
May 11 21:43:13.679: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 8.3127482s
May 11 21:43:15.686: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 10.31943581s
May 11 21:43:17.782: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 12.416271026s
May 11 21:43:19.786: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 14.420009712s
May 11 21:43:21.789: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 16.422849884s
May 11 21:43:23.793: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 18.427057133s
May 11 21:43:25.797: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 20.430800104s
May 11 21:43:27.934: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 22.567496133s
May 11 21:43:29.937: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Running", Reason="", readiness=true. Elapsed: 24.5714078s
May 11 21:43:31.997: INFO: Pod "pod-subpath-test-configmap-7mrm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.630954132s
STEP: Saw pod success
May 11 21:43:31.997: INFO: Pod "pod-subpath-test-configmap-7mrm" satisfied condition "Succeeded or Failed"
May 11 21:43:32.000: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-7mrm container test-container-subpath-configmap-7mrm: 
STEP: delete the pod
May 11 21:43:32.530: INFO: Waiting for pod pod-subpath-test-configmap-7mrm to disappear
May 11 21:43:32.700: INFO: Pod pod-subpath-test-configmap-7mrm no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7mrm
May 11 21:43:32.700: INFO: Deleting pod "pod-subpath-test-configmap-7mrm" in namespace "subpath-1373"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:43:32.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1373" for this suite.

• [SLOW TEST:28.523 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":180,"skipped":3148,"failed":0}
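The subpath test mounts a single entry of the ConfigMap volume into the container via `subPath` rather than mounting the whole volume directory, then watches the file contents while the ConfigMap's atomic-writer updates land. A sketch of the container's mount (names are illustrative; the real test generates pod `pod-subpath-test-configmap-7mrm` above):

```python
container = {
    "name": "test-container-subpath",
    "image": "busybox",
    "volumeMounts": [{
        "name": "test-volume",
        # subPath selects one path within the volume; the container sees a
        # single file at mountPath instead of the volume's directory tree.
        "mountPath": "/test-volume/configmap-file",
        "subPath": "configmap-file",
    }],
}
mount = container["volumeMounts"][0]
```

The long run of `Phase="Running", readiness=true` polls above (roughly 4s to 26s elapsed) covers the window in which the container repeatedly reads the subPath file while updates are written.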
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:43:33.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:43:34.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9091" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":181,"skipped":3174,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:43:34.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:43:34.142: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:43:35.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5970" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":182,"skipped":3178,"failed":0}

------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:43:35.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-3c629d82-9ffb-43b9-8856-bc19de5f5d61
STEP: Creating a pod to test consume secrets
May 11 21:43:35.252: INFO: Waiting up to 5m0s for pod "pod-secrets-d1aed4b7-9d39-4b57-b628-7169c213dde4" in namespace "secrets-4508" to be "Succeeded or Failed"
May 11 21:43:35.317: INFO: Pod "pod-secrets-d1aed4b7-9d39-4b57-b628-7169c213dde4": Phase="Pending", Reason="", readiness=false. Elapsed: 65.12631ms
May 11 21:43:37.358: INFO: Pod "pod-secrets-d1aed4b7-9d39-4b57-b628-7169c213dde4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105507441s
May 11 21:43:39.455: INFO: Pod "pod-secrets-d1aed4b7-9d39-4b57-b628-7169c213dde4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202874061s
May 11 21:43:41.570: INFO: Pod "pod-secrets-d1aed4b7-9d39-4b57-b628-7169c213dde4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.317968787s
STEP: Saw pod success
May 11 21:43:41.570: INFO: Pod "pod-secrets-d1aed4b7-9d39-4b57-b628-7169c213dde4" satisfied condition "Succeeded or Failed"
May 11 21:43:41.574: INFO: Trying to get logs from node kali-worker pod pod-secrets-d1aed4b7-9d39-4b57-b628-7169c213dde4 container secret-volume-test: 
STEP: delete the pod
May 11 21:43:41.630: INFO: Waiting for pod pod-secrets-d1aed4b7-9d39-4b57-b628-7169c213dde4 to disappear
May 11 21:43:41.754: INFO: Pod pod-secrets-d1aed4b7-9d39-4b57-b628-7169c213dde4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:43:41.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4508" for this suite.

• [SLOW TEST:6.587 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3178,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:43:41.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:43:44.122: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:43:46.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830223, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:43:48.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830223, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:43:50.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830223, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:43:52.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830224, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830223, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:43:56.463: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
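The two steps above first update the webhook's rules so CREATE is no longer matched, then patch CREATE back in. A sketch of building such a JSON patch body (RFC 6902) with only the standard library; the `/webhooks/0/rules/0/operations` path is an assumption about where the rule lives in this particular configuration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildRulesPatch returns a JSON patch that replaces the operations list of
// the first rule of the first webhook in a MutatingWebhookConfiguration.
// Passing []string{"UPDATE"} drops CREATE; passing []string{"CREATE", "UPDATE"}
// restores it, matching the two steps logged above.
func buildRulesPatch(operations []string) ([]byte, error) {
	patch := []map[string]interface{}{
		{
			"op":    "replace",
			"path":  "/webhooks/0/rules/0/operations",
			"value": operations,
		},
	}
	return json.Marshal(patch)
}

func main() {
	p, err := buildRulesPatch([]string{"CREATE", "UPDATE"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(p))
	// Such a body would be sent with content type "application/json-patch+json",
	// e.g. through client-go's MutatingWebhookConfigurations().Patch call.
}
```

The update step in the test could equally be a full-object Update rather than a patch; this sketch only illustrates the patch variant.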
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:43:58.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3287" for this suite.
STEP: Destroying namespace "webhook-3287-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.496 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":184,"skipped":3187,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:43:59.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:44:00.257: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 11 21:44:05.403: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 11 21:44:08.431: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 11 21:44:10.434: INFO: Creating deployment "test-rollover-deployment"
May 11 21:44:10.474: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 11 21:44:12.702: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 11 21:44:13.325: INFO: Ensure that both replica sets have 1 created replica
May 11 21:44:13.333: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 11 21:44:14.366: INFO: Updating deployment test-rollover-deployment
May 11 21:44:14.366: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 11 21:44:17.755: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May 11 21:44:19.797: INFO: Make sure deployment "test-rollover-deployment" is complete
May 11 21:44:20.506: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:20.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830259, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:23.580: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:23.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830259, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:24.544: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:24.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830259, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:26.512: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:26.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830259, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:29.894: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:29.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830259, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:31.313: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:31.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830259, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:33.945: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:33.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830259, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:35.085: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:35.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830272, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:36.573: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:36.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830272, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:38.682: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:38.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830272, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:40.511: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:40.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830272, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:42.700: INFO: all replica sets need to contain the pod-template-hash label
May 11 21:44:42.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830272, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:45.361: INFO: 
May 11 21:44:45.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830283, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830250, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:44:46.514: INFO: 
May 11 21:44:46.514: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 11 21:44:46.522: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-6908 /apis/apps/v1/namespaces/deployment-6908/deployments/test-rollover-deployment 07b92133-d972-427a-85a9-a05cd6529f71 3526879 2 2020-05-11 21:44:10 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-11 21:44:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-11 21:44:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 
58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f75cd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 21:44:10 +0000 UTC,LastTransitionTime:2020-05-11 21:44:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-05-11 21:44:45 +0000 UTC,LastTransitionTime:2020-05-11 21:44:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May 11 21:44:46.525: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-6908 /apis/apps/v1/namespaces/deployment-6908/replicasets/test-rollover-deployment-84f7f6f64b 5ed55774-3aa9-40b3-bef3-86eb3426c219 3526865 2 2020-05-11 21:44:14 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 07b92133-d972-427a-85a9-a05cd6529f71 0xc004686317 0xc004686318}] []  [{kube-controller-manager Update apps/v1 2020-05-11 21:44:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 55 98 57 50 49 51 51 45 100 57 55 50 45 52 50 55 97 45 56 53 97 57 45 97 48 53 99 100 54 53 50 57 102 55 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 
123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 
110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046863a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 11 21:44:46.525: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May 11 21:44:46.525: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-6908 /apis/apps/v1/namespaces/deployment-6908/replicasets/test-rollover-controller 0061edbe-68ff-4a8c-87d1-9a06bae92753 3526877 2 2020-05-11 21:44:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 07b92133-d972-427a-85a9-a05cd6529f71 0xc004686107 0xc004686108}] []  [{e2e.test Update apps/v1 2020-05-11 21:44:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 
121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-11 21:44:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 55 98 57 50 49 51 51 45 100 57 55 50 45 52 50 55 97 45 56 53 97 57 45 97 48 53 99 100 54 53 50 57 102 55 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0046861a8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 11 21:44:46.525: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-6908 /apis/apps/v1/namespaces/deployment-6908/replicasets/test-rollover-deployment-5686c4cfd5 967bc890-59d8-4778-9aa1-d20d352518b1 3526791 2 2020-05-11 21:44:10 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 07b92133-d972-427a-85a9-a05cd6529f71 0xc004686217 0xc004686218}] []  [{kube-controller-manager Update apps/v1 2020-05-11 21:44:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 55 98 57 50 49 51 51 45 100 57 55 50 45 52 50 55 97 45 56 53 97 57 45 97 48 53 99 100 54 53 50 57 102 55 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 
111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 
123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046862a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 11 21:44:46.528: INFO: Pod "test-rollover-deployment-84f7f6f64b-2jqbf" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-2jqbf test-rollover-deployment-84f7f6f64b- deployment-6908 /api/v1/namespaces/deployment-6908/pods/test-rollover-deployment-84f7f6f64b-2jqbf 2a6e09f2-4d7e-4519-8fca-317931a66ee0 3526838 0 2020-05-11 21:44:15 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 5ed55774-3aa9-40b3-bef3-86eb3426c219 0xc004686947 0xc004686948}] []  [{kube-controller-manager Update v1 2020-05-11 21:44:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 101 100 53 53 55 55 52 45 51 97 97 57 45 52 48 98 51 45 98 101 102 51 45 56 54 101 98 51 52 50 54 99 50 49 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-11 21:44:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 
84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 55 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jzbhz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jzbhz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jzbhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,
RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:44:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:44:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:44:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:44:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.173,StartTime:2020-05-11 21:44:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 21:44:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://1853b5c66f8d74eaa00ff556442396bccac2a3139c9ce599aa22c9b55e1f46c5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.173,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:44:46.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6908" for this suite.

• [SLOW TEST:47.278 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":185,"skipped":3205,"failed":0}
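The `FieldsV1{Raw:*[123 34 ...]}` blocks in the ReplicaSet and Pod dumps above are managed-fields JSON that client-go prints as raw byte values (decimal ASCII codes). They can be turned back into readable JSON by mapping each number to its character; a minimal sketch in Python (the regex and helper name are illustrative, not part of any Kubernetes tooling):

```python
import json
import re

def decode_fieldsv1(dump: str) -> list:
    """Decode every Raw:*[...] byte list found in a client-go object
    dump into the JSON document it represents."""
    decoded = []
    for match in re.finditer(r"Raw:\*\[([0-9 ]+)\]", dump):
        text = "".join(chr(int(b)) for b in match.group(1).split())
        decoded.append(json.loads(text))
    return decoded

# The opening bytes of the dumps above (123 34 102 58 109 ...) decode
# to the start of a managed-fields object: {"f:metadata": ...
sample = "Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125]"
print(decode_fieldsv1(sample))  # → [{'f:metadata': {}}]
```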
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:44:46.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-3bff7f41-e919-4cb1-8dbb-6e7ea9a25105
STEP: Creating a pod to test consume secrets
May 11 21:44:46.976: INFO: Waiting up to 5m0s for pod "pod-secrets-a4dbaa1d-6157-454b-9f46-c99646cd417a" in namespace "secrets-5341" to be "Succeeded or Failed"
May 11 21:44:46.989: INFO: Pod "pod-secrets-a4dbaa1d-6157-454b-9f46-c99646cd417a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.857936ms
May 11 21:44:49.013: INFO: Pod "pod-secrets-a4dbaa1d-6157-454b-9f46-c99646cd417a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037306583s
May 11 21:44:51.414: INFO: Pod "pod-secrets-a4dbaa1d-6157-454b-9f46-c99646cd417a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437445873s
May 11 21:44:53.485: INFO: Pod "pod-secrets-a4dbaa1d-6157-454b-9f46-c99646cd417a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.508998688s
STEP: Saw pod success
May 11 21:44:53.485: INFO: Pod "pod-secrets-a4dbaa1d-6157-454b-9f46-c99646cd417a" satisfied condition "Succeeded or Failed"
May 11 21:44:53.498: INFO: Trying to get logs from node kali-worker pod pod-secrets-a4dbaa1d-6157-454b-9f46-c99646cd417a container secret-volume-test: 
STEP: delete the pod
May 11 21:44:53.687: INFO: Waiting for pod pod-secrets-a4dbaa1d-6157-454b-9f46-c99646cd417a to disappear
May 11 21:44:53.744: INFO: Pod pod-secrets-a4dbaa1d-6157-454b-9f46-c99646cd417a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:44:53.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5341" for this suite.

• [SLOW TEST:7.219 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3213,"failed":0}
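The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above, with their growing `Elapsed:` values, come from a simple poll-with-timeout loop: check the pod phase, log the elapsed time, sleep, repeat until the condition holds or the deadline passes. A rough Python sketch of that pattern (names and intervals are illustrative; the actual framework is written in Go):

```python
import time

def wait_for_condition(check, timeout, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or `timeout` seconds elapse.
    Returns True on success, False on timeout."""
    start = clock()
    while True:
        elapsed = clock() - start
        if check():
            print("condition met, elapsed: %.3fs" % elapsed)
            return True
        if elapsed >= timeout:
            return False
        print("still waiting, elapsed: %.3fs" % elapsed)
        sleep(interval)

# Simulated pod that reaches phase Succeeded on the third poll.
phases = iter(["Pending", "Pending", "Succeeded"])
ok = wait_for_condition(lambda: next(phases) == "Succeeded",
                        timeout=10, interval=0, sleep=lambda s: None)
print(ok)  # → True
```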
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:44:53.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:44:54.749: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:44:55.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5617" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":187,"skipped":3215,"failed":0}
SSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:44:56.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:45:27.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6373" for this suite.

• [SLOW TEST:30.956 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":188,"skipped":3218,"failed":0}
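The Job test above ("tasks sometimes fail and are locally restarted") exercises the `restartPolicy: OnFailure` path: a failed container is restarted in place on the same node until it exits successfully, and the Job completes once the required number of completions succeed. The control flow can be simulated like this (a sketch of the semantics, not the controller's real code):

```python
def run_job(completions, task, max_restarts_per_task=10):
    """Run `completions` tasks, locally restarting each failed task,
    mimicking a Job with restartPolicy=OnFailure.
    Returns (succeeded, total_restarts)."""
    succeeded = 0
    restarts = 0
    while succeeded < completions:
        for _attempt in range(max_restarts_per_task + 1):
            if task():
                succeeded += 1
                break
            restarts += 1
        else:
            raise RuntimeError("task exceeded restart budget")
    return succeeded, restarts

# A task that fails once, then succeeds twice -- like the test's
# intermittently failing container.
outcomes = iter([False, True, True])
print(run_job(completions=2, task=lambda: next(outcomes)))  # → (2, 1)
```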
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:45:27.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:45:27.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version'
May 11 21:45:28.390: INFO: stderr: ""
May 11 21:45:28.390: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:20Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:45:28.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4582" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":189,"skipped":3218,"failed":0}
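The Kubectl version test above only needs to confirm that `kubectl version` printed both halves of the data: a `Client Version` and a `Server Version` struct, as seen in the stdout it logged. An equivalent check in Python (the e2e test itself is Go; this mirrors the assertion, not the source):

```python
def all_version_data_printed(stdout: str) -> bool:
    """Return True if kubectl version output includes both the client
    and the server version structs."""
    return ("Client Version: version.Info" in stdout
            and "Server Version: version.Info" in stdout)

stdout = (
    'Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2"}\n'
    'Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2"}\n'
)
print(all_version_data_printed(stdout))  # → True
```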
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:45:28.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:45:29.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46" in namespace "downward-api-6393" to be "Succeeded or Failed"
May 11 21:45:29.300: INFO: Pod "downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46": Phase="Pending", Reason="", readiness=false. Elapsed: 284.414948ms
May 11 21:45:31.310: INFO: Pod "downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294780959s
May 11 21:45:33.467: INFO: Pod "downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451230205s
May 11 21:45:35.588: INFO: Pod "downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46": Phase="Running", Reason="", readiness=true. Elapsed: 6.573131301s
May 11 21:45:37.592: INFO: Pod "downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.576471321s
STEP: Saw pod success
May 11 21:45:37.592: INFO: Pod "downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46" satisfied condition "Succeeded or Failed"
May 11 21:45:37.595: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46 container client-container: 
STEP: delete the pod
May 11 21:45:37.704: INFO: Waiting for pod downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46 to disappear
May 11 21:45:37.761: INFO: Pod downwardapi-volume-2311355a-a767-48cc-906a-dd4106b86f46 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:45:37.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6393" for this suite.

• [SLOW TEST:9.358 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3227,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:45:37.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:45:37.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May 11 21:45:40.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2658 create -f -'
May 11 21:45:53.584: INFO: stderr: ""
May 11 21:45:53.584: INFO: stdout: "e2e-test-crd-publish-openapi-3154-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 11 21:45:53.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2658 delete e2e-test-crd-publish-openapi-3154-crds test-foo'
May 11 21:45:53.744: INFO: stderr: ""
May 11 21:45:53.744: INFO: stdout: "e2e-test-crd-publish-openapi-3154-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May 11 21:45:53.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2658 apply -f -'
May 11 21:45:54.362: INFO: stderr: ""
May 11 21:45:54.362: INFO: stdout: "e2e-test-crd-publish-openapi-3154-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 11 21:45:54.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2658 delete e2e-test-crd-publish-openapi-3154-crds test-foo'
May 11 21:45:54.584: INFO: stderr: ""
May 11 21:45:54.584: INFO: stdout: "e2e-test-crd-publish-openapi-3154-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May 11 21:45:54.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2658 create -f -'
May 11 21:45:54.896: INFO: rc: 1
May 11 21:45:54.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2658 apply -f -'
May 11 21:45:55.140: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May 11 21:45:55.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2658 create -f -'
May 11 21:45:55.374: INFO: rc: 1
May 11 21:45:55.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2658 apply -f -'
May 11 21:45:55.600: INFO: rc: 1
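Both rejected requests above exit with rc 1 because kubectl validates the manifest client-side against the published OpenAPI schema before sending it. A sketch of a custom resource the schema would accept — apiVersion and kind are taken from this run's output, while the spec values are purely illustrative:

```yaml
# Hypothetical CR instance; field names follow the schema that
# `kubectl explain` prints later in this test. spec.bars[].name is
# required, so omitting it (or adding an unknown property when the
# schema disallows them) makes `kubectl create`/`kubectl apply`
# fail client-side with rc 1.
apiVersion: crd-publish-openapi-test-foo.example.com/v1
kind: E2e-test-crd-publish-openapi-3154-crd
metadata:
  name: test-foo
spec:
  bars:
  - name: example-bar   # required
    age: 10             # optional; type tag not visible in this log
    bazs: ["a", "b"]    # optional list of strings
```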
STEP: kubectl explain works to explain CR properties
May 11 21:45:55.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3154-crds'
May 11 21:45:55.827: INFO: stderr: ""
May 11 21:45:55.827: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3154-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May 11 21:45:55.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3154-crds.metadata'
May 11 21:45:56.115: INFO: stderr: ""
May 11 21:45:56.115: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3154-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May 11 21:45:56.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3154-crds.spec'
May 11 21:45:56.410: INFO: stderr: ""
May 11 21:45:56.410: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3154-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May 11 21:45:56.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3154-crds.spec.bars'
May 11 21:45:56.814: INFO: stderr: ""
May 11 21:45:56.814: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3154-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 11 21:45:56.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3154-crds.spec.bars2'
May 11 21:45:57.102: INFO: rc: 1
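The properties reported by `kubectl explain` above imply a CRD definition roughly like the following — a hedged reconstruction for illustration only (the e2e suite generates its fixture in code, and types marked as assumptions were not visible in this log):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-3154-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-3154-crds
    kind: E2e-test-crd-publish-openapi-3154-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]   # drives the "rejects request without required properties" step
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: integer    # assumption; the type tag was stripped from this log
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
          status:
            description: Status of Foo
            type: object
```

In apiextensions v1 the schema must be structural, which is what lets the apiserver publish it via OpenAPI and lets `kubectl explain` and client-side validation consume it.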
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:45:59.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2658" for this suite.

• [SLOW TEST:21.267 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":191,"skipped":3249,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:45:59.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7812
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7812
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7812
May 11 21:45:59.219: INFO: Found 0 stateful pods, waiting for 1
May 11 21:46:09.232: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May 11 21:46:09.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7812 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:46:09.480: INFO: stderr: "I0511 21:46:09.358372    3287 log.go:172] (0xc00003aa50) (0xc0007ab5e0) Create stream\nI0511 21:46:09.358426    3287 log.go:172] (0xc00003aa50) (0xc0007ab5e0) Stream added, broadcasting: 1\nI0511 21:46:09.360202    3287 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0511 21:46:09.360240    3287 log.go:172] (0xc00003aa50) (0xc000964000) Create stream\nI0511 21:46:09.360252    3287 log.go:172] (0xc00003aa50) (0xc000964000) Stream added, broadcasting: 3\nI0511 21:46:09.361063    3287 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0511 21:46:09.361291    3287 log.go:172] (0xc00003aa50) (0xc0007ab680) Create stream\nI0511 21:46:09.361312    3287 log.go:172] (0xc00003aa50) (0xc0007ab680) Stream added, broadcasting: 5\nI0511 21:46:09.362081    3287 log.go:172] (0xc00003aa50) Reply frame received for 5\nI0511 21:46:09.445987    3287 log.go:172] (0xc00003aa50) Data frame received for 5\nI0511 21:46:09.446026    3287 log.go:172] (0xc0007ab680) (5) Data frame handling\nI0511 21:46:09.446056    3287 log.go:172] (0xc0007ab680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:46:09.471549    3287 log.go:172] (0xc00003aa50) Data frame received for 3\nI0511 21:46:09.471685    3287 log.go:172] (0xc000964000) (3) Data frame handling\nI0511 21:46:09.471775    3287 log.go:172] (0xc000964000) (3) Data frame sent\nI0511 21:46:09.472009    3287 log.go:172] (0xc00003aa50) Data frame received for 3\nI0511 21:46:09.472058    3287 log.go:172] (0xc000964000) (3) Data frame handling\nI0511 21:46:09.472089    3287 log.go:172] (0xc00003aa50) Data frame received for 5\nI0511 21:46:09.472108    3287 log.go:172] (0xc0007ab680) (5) Data frame handling\nI0511 21:46:09.474414    3287 log.go:172] (0xc00003aa50) Data frame received for 1\nI0511 21:46:09.474514    3287 log.go:172] (0xc0007ab5e0) (1) Data frame handling\nI0511 21:46:09.474652    3287 log.go:172] (0xc0007ab5e0) (1) Data frame sent\nI0511 21:46:09.474750    3287 log.go:172] (0xc00003aa50) (0xc0007ab5e0) Stream removed, broadcasting: 1\nI0511 21:46:09.474786    3287 log.go:172] (0xc00003aa50) Go away received\nI0511 21:46:09.475133    3287 log.go:172] (0xc00003aa50) (0xc0007ab5e0) Stream removed, broadcasting: 1\nI0511 21:46:09.475168    3287 log.go:172] (0xc00003aa50) (0xc000964000) Stream removed, broadcasting: 3\nI0511 21:46:09.475178    3287 log.go:172] (0xc00003aa50) (0xc0007ab680) Stream removed, broadcasting: 5\n"
May 11 21:46:09.480: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:46:09.480: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 11 21:46:09.485: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 11 21:46:19.488: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 11 21:46:19.489: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:46:19.513: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999619s
May 11 21:46:20.518: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984323371s
May 11 21:46:21.522: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.979741254s
May 11 21:46:22.732: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.975497284s
May 11 21:46:23.737: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.764852961s
May 11 21:46:24.740: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.760680493s
May 11 21:46:25.749: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.757310583s
May 11 21:46:26.882: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.747830707s
May 11 21:46:27.899: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.615751049s
May 11 21:46:29.074: INFO: Verifying statefulset ss doesn't scale past 1 for another 597.980444ms
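The halt just verified is the default `OrderedReady` pod-management behavior: ss-0 was made unready by moving `index.html` out of the httpd docroot, so the controller refuses to create ss-1 until ss-0 passes its readiness probe again. A minimal sketch of a StatefulSet wired this way — the image and probe details are assumptions; only the name, service, and labels come from this log:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: OrderedReady   # the default: pods are created/scaled one at a time, in ordinal order
  replicas: 1
  selector:
    matchLabels: {foo: bar, baz: blah}
  template:
    metadata:
      labels: {foo: bar, baz: blah}
    spec:
      containers:
      - name: webserver
        image: httpd:2.4              # assumption; the log only shows an Apache httpd docroot
        readinessProbe:               # fails once index.html is mv'd to /tmp, marking the pod Ready=false
          httpGet: {path: /index.html, port: 80}
          periodSeconds: 1
```

With `OrderedReady`, an unready pod blocks both scale-up (no higher ordinal is created) and scale-down (no higher ordinal is deleted out of order), which is exactly what the "doesn't scale past" checks assert.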
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7812
May 11 21:46:30.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7812 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:46:30.449: INFO: stderr: "I0511 21:46:30.367333    3309 log.go:172] (0xc000b7c0b0) (0xc0004f0a00) Create stream\nI0511 21:46:30.367397    3309 log.go:172] (0xc000b7c0b0) (0xc0004f0a00) Stream added, broadcasting: 1\nI0511 21:46:30.372199    3309 log.go:172] (0xc000b7c0b0) Reply frame received for 1\nI0511 21:46:30.372263    3309 log.go:172] (0xc000b7c0b0) (0xc00099c000) Create stream\nI0511 21:46:30.372284    3309 log.go:172] (0xc000b7c0b0) (0xc00099c000) Stream added, broadcasting: 3\nI0511 21:46:30.373347    3309 log.go:172] (0xc000b7c0b0) Reply frame received for 3\nI0511 21:46:30.373380    3309 log.go:172] (0xc000b7c0b0) (0xc00099c0a0) Create stream\nI0511 21:46:30.373389    3309 log.go:172] (0xc000b7c0b0) (0xc00099c0a0) Stream added, broadcasting: 5\nI0511 21:46:30.374288    3309 log.go:172] (0xc000b7c0b0) Reply frame received for 5\nI0511 21:46:30.442838    3309 log.go:172] (0xc000b7c0b0) Data frame received for 3\nI0511 21:46:30.442868    3309 log.go:172] (0xc00099c000) (3) Data frame handling\nI0511 21:46:30.442878    3309 log.go:172] (0xc00099c000) (3) Data frame sent\nI0511 21:46:30.442910    3309 log.go:172] (0xc000b7c0b0) Data frame received for 5\nI0511 21:46:30.442920    3309 log.go:172] (0xc00099c0a0) (5) Data frame handling\nI0511 21:46:30.442931    3309 log.go:172] (0xc00099c0a0) (5) Data frame sent\nI0511 21:46:30.442947    3309 log.go:172] (0xc000b7c0b0) Data frame received for 5\nI0511 21:46:30.442961    3309 log.go:172] (0xc00099c0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:46:30.443019    3309 log.go:172] (0xc000b7c0b0) Data frame received for 3\nI0511 21:46:30.443054    3309 log.go:172] (0xc00099c000) (3) Data frame handling\nI0511 21:46:30.444635    3309 log.go:172] (0xc000b7c0b0) Data frame received for 1\nI0511 21:46:30.444658    3309 log.go:172] (0xc0004f0a00) (1) Data frame handling\nI0511 21:46:30.444679    3309 log.go:172] (0xc0004f0a00) (1) Data frame sent\nI0511 21:46:30.444701    3309 log.go:172] (0xc000b7c0b0) (0xc0004f0a00) Stream removed, broadcasting: 1\nI0511 21:46:30.444720    3309 log.go:172] (0xc000b7c0b0) Go away received\nI0511 21:46:30.445020    3309 log.go:172] (0xc000b7c0b0) (0xc0004f0a00) Stream removed, broadcasting: 1\nI0511 21:46:30.445034    3309 log.go:172] (0xc000b7c0b0) (0xc00099c000) Stream removed, broadcasting: 3\nI0511 21:46:30.445040    3309 log.go:172] (0xc000b7c0b0) (0xc00099c0a0) Stream removed, broadcasting: 5\n"
May 11 21:46:30.449: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 11 21:46:30.449: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 11 21:46:30.453: INFO: Found 1 stateful pods, waiting for 3
May 11 21:46:40.523: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:46:40.523: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:46:40.523: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
May 11 21:46:50.459: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:46:50.459: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:46:50.459: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 11 21:46:50.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7812 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:46:50.679: INFO: stderr: "I0511 21:46:50.605075    3329 log.go:172] (0xc0009980b0) (0xc0006e34a0) Create stream\nI0511 21:46:50.605336    3329 log.go:172] (0xc0009980b0) (0xc0006e34a0) Stream added, broadcasting: 1\nI0511 21:46:50.607658    3329 log.go:172] (0xc0009980b0) Reply frame received for 1\nI0511 21:46:50.607711    3329 log.go:172] (0xc0009980b0) (0xc000aea000) Create stream\nI0511 21:46:50.607725    3329 log.go:172] (0xc0009980b0) (0xc000aea000) Stream added, broadcasting: 3\nI0511 21:46:50.608776    3329 log.go:172] (0xc0009980b0) Reply frame received for 3\nI0511 21:46:50.608824    3329 log.go:172] (0xc0009980b0) (0xc000406000) Create stream\nI0511 21:46:50.608839    3329 log.go:172] (0xc0009980b0) (0xc000406000) Stream added, broadcasting: 5\nI0511 21:46:50.610071    3329 log.go:172] (0xc0009980b0) Reply frame received for 5\nI0511 21:46:50.674000    3329 log.go:172] (0xc0009980b0) Data frame received for 3\nI0511 21:46:50.674134    3329 log.go:172] (0xc000aea000) (3) Data frame handling\nI0511 21:46:50.674159    3329 log.go:172] (0xc000aea000) (3) Data frame sent\nI0511 21:46:50.674171    3329 log.go:172] (0xc0009980b0) Data frame received for 3\nI0511 21:46:50.674192    3329 log.go:172] (0xc000aea000) (3) Data frame handling\nI0511 21:46:50.674211    3329 log.go:172] (0xc0009980b0) Data frame received for 5\nI0511 21:46:50.674225    3329 log.go:172] (0xc000406000) (5) Data frame handling\nI0511 21:46:50.674235    3329 log.go:172] (0xc000406000) (5) Data frame sent\nI0511 21:46:50.674244    3329 log.go:172] (0xc0009980b0) Data frame received for 5\nI0511 21:46:50.674254    3329 log.go:172] (0xc000406000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:46:50.675807    3329 log.go:172] (0xc0009980b0) Data frame received for 1\nI0511 21:46:50.675823    3329 log.go:172] (0xc0006e34a0) (1) Data frame handling\nI0511 21:46:50.675835    3329 log.go:172] (0xc0006e34a0) (1) Data frame sent\nI0511 21:46:50.675851    3329 log.go:172] (0xc0009980b0) (0xc0006e34a0) Stream removed, broadcasting: 1\nI0511 21:46:50.675864    3329 log.go:172] (0xc0009980b0) Go away received\nI0511 21:46:50.676158    3329 log.go:172] (0xc0009980b0) (0xc0006e34a0) Stream removed, broadcasting: 1\nI0511 21:46:50.676174    3329 log.go:172] (0xc0009980b0) (0xc000aea000) Stream removed, broadcasting: 3\nI0511 21:46:50.676181    3329 log.go:172] (0xc0009980b0) (0xc000406000) Stream removed, broadcasting: 5\n"
May 11 21:46:50.679: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:46:50.679: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 11 21:46:50.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7812 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:46:50.945: INFO: stderr: "I0511 21:46:50.855703    3350 log.go:172] (0xc0009580b0) (0xc000aec0a0) Create stream\nI0511 21:46:50.855764    3350 log.go:172] (0xc0009580b0) (0xc000aec0a0) Stream added, broadcasting: 1\nI0511 21:46:50.858208    3350 log.go:172] (0xc0009580b0) Reply frame received for 1\nI0511 21:46:50.858236    3350 log.go:172] (0xc0009580b0) (0xc0009012c0) Create stream\nI0511 21:46:50.858243    3350 log.go:172] (0xc0009580b0) (0xc0009012c0) Stream added, broadcasting: 3\nI0511 21:46:50.859044    3350 log.go:172] (0xc0009580b0) Reply frame received for 3\nI0511 21:46:50.859083    3350 log.go:172] (0xc0009580b0) (0xc000aec140) Create stream\nI0511 21:46:50.859091    3350 log.go:172] (0xc0009580b0) (0xc000aec140) Stream added, broadcasting: 5\nI0511 21:46:50.859799    3350 log.go:172] (0xc0009580b0) Reply frame received for 5\nI0511 21:46:50.911634    3350 log.go:172] (0xc0009580b0) Data frame received for 5\nI0511 21:46:50.911661    3350 log.go:172] (0xc000aec140) (5) Data frame handling\nI0511 21:46:50.911682    3350 log.go:172] (0xc000aec140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:46:50.938387    3350 log.go:172] (0xc0009580b0) Data frame received for 3\nI0511 21:46:50.938407    3350 log.go:172] (0xc0009012c0) (3) Data frame handling\nI0511 21:46:50.938418    3350 log.go:172] (0xc0009012c0) (3) Data frame sent\nI0511 21:46:50.938678    3350 log.go:172] (0xc0009580b0) Data frame received for 5\nI0511 21:46:50.938700    3350 log.go:172] (0xc000aec140) (5) Data frame handling\nI0511 21:46:50.938796    3350 log.go:172] (0xc0009580b0) Data frame received for 3\nI0511 21:46:50.938817    3350 log.go:172] (0xc0009012c0) (3) Data frame handling\nI0511 21:46:50.940591    3350 log.go:172] (0xc0009580b0) Data frame received for 1\nI0511 21:46:50.940612    3350 log.go:172] (0xc000aec0a0) (1) Data frame handling\nI0511 21:46:50.940636    3350 log.go:172] (0xc000aec0a0) (1) Data frame sent\nI0511 21:46:50.940670    3350 log.go:172] (0xc0009580b0) (0xc000aec0a0) Stream removed, broadcasting: 1\nI0511 21:46:50.940919    3350 log.go:172] (0xc0009580b0) (0xc000aec0a0) Stream removed, broadcasting: 1\nI0511 21:46:50.940933    3350 log.go:172] (0xc0009580b0) (0xc0009012c0) Stream removed, broadcasting: 3\nI0511 21:46:50.941399    3350 log.go:172] (0xc0009580b0) Go away received\nI0511 21:46:50.941457    3350 log.go:172] (0xc0009580b0) (0xc000aec140) Stream removed, broadcasting: 5\n"
May 11 21:46:50.945: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:46:50.945: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 11 21:46:50.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7812 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:46:51.220: INFO: stderr: "I0511 21:46:51.092392    3369 log.go:172] (0xc000718bb0) (0xc000a301e0) Create stream\nI0511 21:46:51.092460    3369 log.go:172] (0xc000718bb0) (0xc000a301e0) Stream added, broadcasting: 1\nI0511 21:46:51.095966    3369 log.go:172] (0xc000718bb0) Reply frame received for 1\nI0511 21:46:51.096016    3369 log.go:172] (0xc000718bb0) (0xc0007a9180) Create stream\nI0511 21:46:51.096043    3369 log.go:172] (0xc000718bb0) (0xc0007a9180) Stream added, broadcasting: 3\nI0511 21:46:51.097340    3369 log.go:172] (0xc000718bb0) Reply frame received for 3\nI0511 21:46:51.097372    3369 log.go:172] (0xc000718bb0) (0xc0003a2000) Create stream\nI0511 21:46:51.097381    3369 log.go:172] (0xc000718bb0) (0xc0003a2000) Stream added, broadcasting: 5\nI0511 21:46:51.098577    3369 log.go:172] (0xc000718bb0) Reply frame received for 5\nI0511 21:46:51.166500    3369 log.go:172] (0xc000718bb0) Data frame received for 5\nI0511 21:46:51.166526    3369 log.go:172] (0xc0003a2000) (5) Data frame handling\nI0511 21:46:51.166543    3369 log.go:172] (0xc0003a2000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:46:51.209820    3369 log.go:172] (0xc000718bb0) Data frame received for 3\nI0511 21:46:51.209841    3369 log.go:172] (0xc0007a9180) (3) Data frame handling\nI0511 21:46:51.209859    3369 log.go:172] (0xc0007a9180) (3) Data frame sent\nI0511 21:46:51.210051    3369 log.go:172] (0xc000718bb0) Data frame received for 3\nI0511 21:46:51.210072    3369 log.go:172] (0xc0007a9180) (3) Data frame handling\nI0511 21:46:51.210295    3369 log.go:172] (0xc000718bb0) Data frame received for 5\nI0511 21:46:51.210351    3369 log.go:172] (0xc0003a2000) (5) Data frame handling\nI0511 21:46:51.212694    3369 log.go:172] (0xc000718bb0) Data frame received for 1\nI0511 21:46:51.212728    3369 log.go:172] (0xc000a301e0) (1) Data frame handling\nI0511 21:46:51.212751    3369 log.go:172] (0xc000a301e0) (1) Data frame sent\nI0511 21:46:51.212777    3369 log.go:172] (0xc000718bb0) (0xc000a301e0) Stream removed, broadcasting: 1\nI0511 21:46:51.212804    3369 log.go:172] (0xc000718bb0) Go away received\nI0511 21:46:51.213282    3369 log.go:172] (0xc000718bb0) (0xc000a301e0) Stream removed, broadcasting: 1\nI0511 21:46:51.213303    3369 log.go:172] (0xc000718bb0) (0xc0007a9180) Stream removed, broadcasting: 3\nI0511 21:46:51.213319    3369 log.go:172] (0xc000718bb0) (0xc0003a2000) Stream removed, broadcasting: 5\n"
May 11 21:46:51.220: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:46:51.220: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 11 21:46:51.220: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:46:51.223: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 11 21:47:01.427: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 11 21:47:01.427: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 11 21:47:01.427: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 11 21:47:01.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999402s
May 11 21:47:02.534: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987071747s
May 11 21:47:03.539: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.899996295s
May 11 21:47:04.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.894359825s
May 11 21:47:05.807: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.631492945s
May 11 21:47:06.810: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.627279963s
May 11 21:47:07.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.62336818s
May 11 21:47:08.864: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.587607374s
May 11 21:47:09.876: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.570063937s
May 11 21:47:10.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 558.141369ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7812
May 11 21:47:11.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7812 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:47:12.194: INFO: stderr: "I0511 21:47:12.097329    3392 log.go:172] (0xc000975ce0) (0xc000b32960) Create stream\nI0511 21:47:12.097373    3392 log.go:172] (0xc000975ce0) (0xc000b32960) Stream added, broadcasting: 1\nI0511 21:47:12.100351    3392 log.go:172] (0xc000975ce0) Reply frame received for 1\nI0511 21:47:12.100385    3392 log.go:172] (0xc000975ce0) (0xc000b32000) Create stream\nI0511 21:47:12.100395    3392 log.go:172] (0xc000975ce0) (0xc000b32000) Stream added, broadcasting: 3\nI0511 21:47:12.101089    3392 log.go:172] (0xc000975ce0) Reply frame received for 3\nI0511 21:47:12.101408    3392 log.go:172] (0xc000975ce0) (0xc000b320a0) Create stream\nI0511 21:47:12.101480    3392 log.go:172] (0xc000975ce0) (0xc000b320a0) Stream added, broadcasting: 5\nI0511 21:47:12.103273    3392 log.go:172] (0xc000975ce0) Reply frame received for 5\nI0511 21:47:12.190377    3392 log.go:172] (0xc000975ce0) Data frame received for 3\nI0511 21:47:12.190416    3392 log.go:172] (0xc000b32000) (3) Data frame handling\nI0511 21:47:12.190426    3392 log.go:172] (0xc000b32000) (3) Data frame sent\nI0511 21:47:12.190433    3392 log.go:172] (0xc000975ce0) Data frame received for 3\nI0511 21:47:12.190440    3392 log.go:172] (0xc000b32000) (3) Data frame handling\nI0511 21:47:12.190475    3392 log.go:172] (0xc000975ce0) Data frame received for 5\nI0511 21:47:12.190484    3392 log.go:172] (0xc000b320a0) (5) Data frame handling\nI0511 21:47:12.190493    3392 log.go:172] (0xc000b320a0) (5) Data frame sent\nI0511 21:47:12.190501    3392 log.go:172] (0xc000975ce0) Data frame received for 5\nI0511 21:47:12.190507    3392 log.go:172] (0xc000b320a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:47:12.192148    3392 log.go:172] (0xc000975ce0) Data frame received for 1\nI0511 21:47:12.192166    3392 log.go:172] (0xc000b32960) (1) Data frame handling\nI0511 21:47:12.192177    3392 log.go:172] (0xc000b32960) (1) Data frame sent\nI0511 21:47:12.192187  
  3392 log.go:172] (0xc000975ce0) (0xc000b32960) Stream removed, broadcasting: 1\nI0511 21:47:12.192439    3392 log.go:172] (0xc000975ce0) (0xc000b32960) Stream removed, broadcasting: 1\nI0511 21:47:12.192456    3392 log.go:172] (0xc000975ce0) (0xc000b32000) Stream removed, broadcasting: 3\nI0511 21:47:12.192465    3392 log.go:172] (0xc000975ce0) (0xc000b320a0) Stream removed, broadcasting: 5\n"
May 11 21:47:12.194: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 11 21:47:12.194: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 11 21:47:12.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7812 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:47:12.384: INFO: stderr: "I0511 21:47:12.304926    3410 log.go:172] (0xc000af58c0) (0xc000914820) Create stream\nI0511 21:47:12.304975    3410 log.go:172] (0xc000af58c0) (0xc000914820) Stream added, broadcasting: 1\nI0511 21:47:12.307397    3410 log.go:172] (0xc000af58c0) Reply frame received for 1\nI0511 21:47:12.307428    3410 log.go:172] (0xc000af58c0) (0xc0006672c0) Create stream\nI0511 21:47:12.307437    3410 log.go:172] (0xc000af58c0) (0xc0006672c0) Stream added, broadcasting: 3\nI0511 21:47:12.308148    3410 log.go:172] (0xc000af58c0) Reply frame received for 3\nI0511 21:47:12.308179    3410 log.go:172] (0xc000af58c0) (0xc00053f680) Create stream\nI0511 21:47:12.308189    3410 log.go:172] (0xc000af58c0) (0xc00053f680) Stream added, broadcasting: 5\nI0511 21:47:12.309505    3410 log.go:172] (0xc000af58c0) Reply frame received for 5\nI0511 21:47:12.381379    3410 log.go:172] (0xc000af58c0) Data frame received for 3\nI0511 21:47:12.381426    3410 log.go:172] (0xc0006672c0) (3) Data frame handling\nI0511 21:47:12.381445    3410 log.go:172] (0xc0006672c0) (3) Data frame sent\nI0511 21:47:12.381464    3410 log.go:172] (0xc000af58c0) Data frame received for 3\nI0511 21:47:12.381477    3410 log.go:172] (0xc0006672c0) (3) Data frame handling\nI0511 21:47:12.381510    3410 log.go:172] (0xc000af58c0) Data frame received for 5\nI0511 21:47:12.381527    3410 log.go:172] (0xc00053f680) (5) Data frame handling\nI0511 21:47:12.381541    3410 log.go:172] (0xc00053f680) (5) Data frame sent\nI0511 21:47:12.381548    3410 log.go:172] (0xc000af58c0) Data frame received for 5\nI0511 21:47:12.381553    3410 log.go:172] (0xc00053f680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:47:12.382544    3410 log.go:172] (0xc000af58c0) Data frame received for 1\nI0511 21:47:12.382565    3410 log.go:172] (0xc000914820) (1) Data frame handling\nI0511 21:47:12.382578    3410 log.go:172] (0xc000914820) (1) Data frame sent\nI0511 21:47:12.382589  
  3410 log.go:172] (0xc000af58c0) (0xc000914820) Stream removed, broadcasting: 1\nI0511 21:47:12.382636    3410 log.go:172] (0xc000af58c0) Go away received\nI0511 21:47:12.382802    3410 log.go:172] (0xc000af58c0) (0xc000914820) Stream removed, broadcasting: 1\nI0511 21:47:12.382810    3410 log.go:172] (0xc000af58c0) (0xc0006672c0) Stream removed, broadcasting: 3\nI0511 21:47:12.382815    3410 log.go:172] (0xc000af58c0) (0xc00053f680) Stream removed, broadcasting: 5\n"
May 11 21:47:12.384: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 11 21:47:12.385: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 11 21:47:12.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7812 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:47:12.634: INFO: stderr: "I0511 21:47:12.535282    3429 log.go:172] (0xc00003a0b0) (0xc0004000a0) Create stream\nI0511 21:47:12.535338    3429 log.go:172] (0xc00003a0b0) (0xc0004000a0) Stream added, broadcasting: 1\nI0511 21:47:12.537021    3429 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0511 21:47:12.537044    3429 log.go:172] (0xc00003a0b0) (0xc000400140) Create stream\nI0511 21:47:12.537053    3429 log.go:172] (0xc00003a0b0) (0xc000400140) Stream added, broadcasting: 3\nI0511 21:47:12.537807    3429 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0511 21:47:12.537845    3429 log.go:172] (0xc00003a0b0) (0xc00040f5e0) Create stream\nI0511 21:47:12.537854    3429 log.go:172] (0xc00003a0b0) (0xc00040f5e0) Stream added, broadcasting: 5\nI0511 21:47:12.538569    3429 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0511 21:47:12.624398    3429 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0511 21:47:12.624421    3429 log.go:172] (0xc00040f5e0) (5) Data frame handling\nI0511 21:47:12.624438    3429 log.go:172] (0xc00040f5e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:47:12.626441    3429 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0511 21:47:12.626468    3429 log.go:172] (0xc000400140) (3) Data frame handling\nI0511 21:47:12.626484    3429 log.go:172] (0xc000400140) (3) Data frame sent\nI0511 21:47:12.626618    3429 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0511 21:47:12.626652    3429 log.go:172] (0xc00040f5e0) (5) Data frame handling\nI0511 21:47:12.626725    3429 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0511 21:47:12.626744    3429 log.go:172] (0xc000400140) (3) Data frame handling\nI0511 21:47:12.628253    3429 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0511 21:47:12.628268    3429 log.go:172] (0xc0004000a0) (1) Data frame handling\nI0511 21:47:12.628276    3429 log.go:172] (0xc0004000a0) (1) Data frame sent\nI0511 21:47:12.628295  
  3429 log.go:172] (0xc00003a0b0) (0xc0004000a0) Stream removed, broadcasting: 1\nI0511 21:47:12.628355    3429 log.go:172] (0xc00003a0b0) Go away received\nI0511 21:47:12.628623    3429 log.go:172] (0xc00003a0b0) (0xc0004000a0) Stream removed, broadcasting: 1\nI0511 21:47:12.628646    3429 log.go:172] (0xc00003a0b0) (0xc000400140) Stream removed, broadcasting: 3\nI0511 21:47:12.628656    3429 log.go:172] (0xc00003a0b0) (0xc00040f5e0) Stream removed, broadcasting: 5\n"
May 11 21:47:12.634: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 11 21:47:12.634: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 11 21:47:12.634: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 11 21:47:42.647: INFO: Deleting all statefulset in ns statefulset-7812
May 11 21:47:42.649: INFO: Scaling statefulset ss to 0
May 11 21:47:42.654: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:47:42.655: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:47:42.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7812" for this suite.

• [SLOW TEST:103.654 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":192,"skipped":3278,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:47:42.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-67b3660e-2ed7-4d8a-8553-fef700425cd6
STEP: Creating a pod to test consume secrets
May 11 21:47:43.243: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4aabc219-bcc8-4df9-aa82-8b6ab6845b3b" in namespace "projected-3406" to be "Succeeded or Failed"
May 11 21:47:43.311: INFO: Pod "pod-projected-secrets-4aabc219-bcc8-4df9-aa82-8b6ab6845b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 68.007667ms
May 11 21:47:45.374: INFO: Pod "pod-projected-secrets-4aabc219-bcc8-4df9-aa82-8b6ab6845b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130482258s
May 11 21:47:47.403: INFO: Pod "pod-projected-secrets-4aabc219-bcc8-4df9-aa82-8b6ab6845b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159236868s
May 11 21:47:49.504: INFO: Pod "pod-projected-secrets-4aabc219-bcc8-4df9-aa82-8b6ab6845b3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.261100158s
STEP: Saw pod success
May 11 21:47:49.505: INFO: Pod "pod-projected-secrets-4aabc219-bcc8-4df9-aa82-8b6ab6845b3b" satisfied condition "Succeeded or Failed"
May 11 21:47:49.978: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-4aabc219-bcc8-4df9-aa82-8b6ab6845b3b container projected-secret-volume-test: 
STEP: delete the pod
May 11 21:47:51.066: INFO: Waiting for pod pod-projected-secrets-4aabc219-bcc8-4df9-aa82-8b6ab6845b3b to disappear
May 11 21:47:51.247: INFO: Pod pod-projected-secrets-4aabc219-bcc8-4df9-aa82-8b6ab6845b3b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:47:51.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3406" for this suite.

• [SLOW TEST:8.634 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3285,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:47:51.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 11 21:47:51.729: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 11 21:47:51.810: INFO: Waiting for terminating namespaces to be deleted...
May 11 21:47:51.817: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May 11 21:47:51.842: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 21:47:51.842: INFO: 	Container kindnet-cni ready: true, restart count 1
May 11 21:47:51.842: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 21:47:51.842: INFO: 	Container kube-proxy ready: true, restart count 0
May 11 21:47:51.842: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May 11 21:47:51.857: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 21:47:51.857: INFO: 	Container kindnet-cni ready: true, restart count 0
May 11 21:47:51.857: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 21:47:51.857: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.160e179ace0dbcc6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:47:52.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7058" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":194,"skipped":3291,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:47:53.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:47:53.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d6d511b-0782-4cbf-8e79-4d5deeee62ad" in namespace "projected-9537" to be "Succeeded or Failed"
May 11 21:47:53.296: INFO: Pod "downwardapi-volume-3d6d511b-0782-4cbf-8e79-4d5deeee62ad": Phase="Pending", Reason="", readiness=false. Elapsed: 59.000441ms
May 11 21:47:55.367: INFO: Pod "downwardapi-volume-3d6d511b-0782-4cbf-8e79-4d5deeee62ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130563863s
May 11 21:47:57.583: INFO: Pod "downwardapi-volume-3d6d511b-0782-4cbf-8e79-4d5deeee62ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346305126s
May 11 21:47:59.590: INFO: Pod "downwardapi-volume-3d6d511b-0782-4cbf-8e79-4d5deeee62ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.353377282s
STEP: Saw pod success
May 11 21:47:59.590: INFO: Pod "downwardapi-volume-3d6d511b-0782-4cbf-8e79-4d5deeee62ad" satisfied condition "Succeeded or Failed"
May 11 21:47:59.592: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-3d6d511b-0782-4cbf-8e79-4d5deeee62ad container client-container: 
STEP: delete the pod
May 11 21:47:59.629: INFO: Waiting for pod downwardapi-volume-3d6d511b-0782-4cbf-8e79-4d5deeee62ad to disappear
May 11 21:47:59.641: INFO: Pod downwardapi-volume-3d6d511b-0782-4cbf-8e79-4d5deeee62ad no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:47:59.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9537" for this suite.

• [SLOW TEST:6.643 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3293,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:47:59.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 11 21:48:06.322: INFO: Successfully updated pod "labelsupdate3d369ed9-2b33-40c4-9654-20952e527632"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:48:08.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6087" for this suite.

• [SLOW TEST:8.744 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3301,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:48:08.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-24d8c43c-2101-4bec-a1ff-06b9384548d3
STEP: Creating a pod to test consume secrets
May 11 21:48:08.735: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fc8600a7-8536-4c72-b683-f90af2354d7c" in namespace "projected-6409" to be "Succeeded or Failed"
May 11 21:48:08.787: INFO: Pod "pod-projected-secrets-fc8600a7-8536-4c72-b683-f90af2354d7c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.879408ms
May 11 21:48:10.912: INFO: Pod "pod-projected-secrets-fc8600a7-8536-4c72-b683-f90af2354d7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177628819s
May 11 21:48:13.076: INFO: Pod "pod-projected-secrets-fc8600a7-8536-4c72-b683-f90af2354d7c": Phase="Running", Reason="", readiness=true. Elapsed: 4.341187421s
May 11 21:48:15.079: INFO: Pod "pod-projected-secrets-fc8600a7-8536-4c72-b683-f90af2354d7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.344527096s
STEP: Saw pod success
May 11 21:48:15.079: INFO: Pod "pod-projected-secrets-fc8600a7-8536-4c72-b683-f90af2354d7c" satisfied condition "Succeeded or Failed"
May 11 21:48:15.081: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-fc8600a7-8536-4c72-b683-f90af2354d7c container projected-secret-volume-test: 
STEP: delete the pod
May 11 21:48:15.334: INFO: Waiting for pod pod-projected-secrets-fc8600a7-8536-4c72-b683-f90af2354d7c to disappear
May 11 21:48:15.499: INFO: Pod pod-projected-secrets-fc8600a7-8536-4c72-b683-f90af2354d7c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:48:15.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6409" for this suite.

• [SLOW TEST:7.156 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3314,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:48:15.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-3b87b4f4-bc03-42e8-9c5f-610dbf881258
STEP: Creating a pod to test consume secrets
May 11 21:48:15.862: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c6d3aad-88b9-46b2-9e08-2b9530ebeca2" in namespace "projected-5262" to be "Succeeded or Failed"
May 11 21:48:15.882: INFO: Pod "pod-projected-secrets-9c6d3aad-88b9-46b2-9e08-2b9530ebeca2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.76987ms
May 11 21:48:18.320: INFO: Pod "pod-projected-secrets-9c6d3aad-88b9-46b2-9e08-2b9530ebeca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457676927s
May 11 21:48:20.535: INFO: Pod "pod-projected-secrets-9c6d3aad-88b9-46b2-9e08-2b9530ebeca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.673299664s
STEP: Saw pod success
May 11 21:48:20.535: INFO: Pod "pod-projected-secrets-9c6d3aad-88b9-46b2-9e08-2b9530ebeca2" satisfied condition "Succeeded or Failed"
May 11 21:48:20.539: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-9c6d3aad-88b9-46b2-9e08-2b9530ebeca2 container secret-volume-test: 
STEP: delete the pod
May 11 21:48:20.884: INFO: Waiting for pod pod-projected-secrets-9c6d3aad-88b9-46b2-9e08-2b9530ebeca2 to disappear
May 11 21:48:20.888: INFO: Pod pod-projected-secrets-9c6d3aad-88b9-46b2-9e08-2b9530ebeca2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:48:20.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5262" for this suite.

• [SLOW TEST:5.389 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3320,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:48:20.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-e3706c01-0c4f-42d9-81c2-a4e6058cd64a
STEP: Creating a pod to test consume configMaps
May 11 21:48:21.171: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79961e7a-7c18-42e8-b888-e11481ab3922" in namespace "projected-3912" to be "Succeeded or Failed"
May 11 21:48:21.295: INFO: Pod "pod-projected-configmaps-79961e7a-7c18-42e8-b888-e11481ab3922": Phase="Pending", Reason="", readiness=false. Elapsed: 123.75459ms
May 11 21:48:23.775: INFO: Pod "pod-projected-configmaps-79961e7a-7c18-42e8-b888-e11481ab3922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.603994674s
May 11 21:48:25.780: INFO: Pod "pod-projected-configmaps-79961e7a-7c18-42e8-b888-e11481ab3922": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608556174s
May 11 21:48:27.860: INFO: Pod "pod-projected-configmaps-79961e7a-7c18-42e8-b888-e11481ab3922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.68847902s
STEP: Saw pod success
May 11 21:48:27.860: INFO: Pod "pod-projected-configmaps-79961e7a-7c18-42e8-b888-e11481ab3922" satisfied condition "Succeeded or Failed"
May 11 21:48:27.895: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-79961e7a-7c18-42e8-b888-e11481ab3922 container projected-configmap-volume-test: 
STEP: delete the pod
May 11 21:48:28.087: INFO: Waiting for pod pod-projected-configmaps-79961e7a-7c18-42e8-b888-e11481ab3922 to disappear
May 11 21:48:28.129: INFO: Pod pod-projected-configmaps-79961e7a-7c18-42e8-b888-e11481ab3922 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:48:28.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3912" for this suite.

• [SLOW TEST:7.200 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3342,"failed":0}
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:48:28.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 11 21:48:35.118: INFO: Successfully updated pod "labelsupdatefdcb557a-801d-490c-a585-1e9d9c66d51a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:48:37.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2660" for this suite.

• [SLOW TEST:9.054 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3342,"failed":0}
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:48:37.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 11 21:48:37.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-739'
May 11 21:48:37.349: INFO: stderr: ""
May 11 21:48:37.349: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
May 11 21:48:42.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-739 -o json'
May 11 21:48:42.504: INFO: stderr: ""
May 11 21:48:42.504: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-05-11T21:48:37Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-05-11T21:48:37Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                        
    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.141\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                            }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-05-11T21:48:42Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-739\",\n        \"resourceVersion\": \"3528161\",\n        \"selfLink\": 
\"/api/v1/namespaces/kubectl-739/pods/e2e-test-httpd-pod\",\n        \"uid\": \"68fecaf0-ad42-4c21-94b9-c41a2aa2fcbd\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-6ztwd\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-6ztwd\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-6ztwd\"\n                }\n         
   }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-11T21:48:37Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-11T21:48:42Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-11T21:48:42Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-11T21:48:37Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://2d240704d2a6881c767714a50341030d0e1cf1c63cb4146ed6b13a2d0c1eab24\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-05-11T21:48:41Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.15\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.141\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.141\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        
\"startTime\": \"2020-05-11T21:48:37Z\"\n    }\n}\n"
STEP: replace the image in the pod
May 11 21:48:42.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-739'
May 11 21:48:43.082: INFO: stderr: ""
May 11 21:48:43.082: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
May 11 21:48:43.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-739'
May 11 21:48:53.752: INFO: stderr: ""
May 11 21:48:53.752: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:48:53.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-739" for this suite.

• [SLOW TEST:16.636 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":201,"skipped":3342,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:48:53.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-b7d535e3-8cf2-4b02-80f2-976fb7ceead8
STEP: Creating configMap with name cm-test-opt-upd-cda7a5d8-476a-4947-abd8-ea40cb5c4393
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b7d535e3-8cf2-4b02-80f2-976fb7ceead8
STEP: Updating configmap cm-test-opt-upd-cda7a5d8-476a-4947-abd8-ea40cb5c4393
STEP: Creating configMap with name cm-test-opt-create-f67d0c73-3d44-402c-ba68-7eeafa1b36f9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:49:04.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8291" for this suite.

• [SLOW TEST:10.727 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3348,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:49:04.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4019
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May 11 21:49:04.970: INFO: Found 0 stateful pods, waiting for 3
May 11 21:49:14.974: INFO: Found 2 stateful pods, waiting for 3
May 11 21:49:25.009: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:49:25.009: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:49:25.009: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May 11 21:49:25.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4019 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:49:25.620: INFO: stderr: "I0511 21:49:25.199536    3522 log.go:172] (0xc00095c840) (0xc000687720) Create stream\nI0511 21:49:25.199586    3522 log.go:172] (0xc00095c840) (0xc000687720) Stream added, broadcasting: 1\nI0511 21:49:25.205479    3522 log.go:172] (0xc00095c840) Reply frame received for 1\nI0511 21:49:25.205511    3522 log.go:172] (0xc00095c840) (0xc0009ac000) Create stream\nI0511 21:49:25.205524    3522 log.go:172] (0xc00095c840) (0xc0009ac000) Stream added, broadcasting: 3\nI0511 21:49:25.206749    3522 log.go:172] (0xc00095c840) Reply frame received for 3\nI0511 21:49:25.206781    3522 log.go:172] (0xc00095c840) (0xc0009ac0a0) Create stream\nI0511 21:49:25.210384    3522 log.go:172] (0xc00095c840) (0xc0009ac0a0) Stream added, broadcasting: 5\nI0511 21:49:25.211296    3522 log.go:172] (0xc00095c840) Reply frame received for 5\nI0511 21:49:25.264558    3522 log.go:172] (0xc00095c840) Data frame received for 5\nI0511 21:49:25.264583    3522 log.go:172] (0xc0009ac0a0) (5) Data frame handling\nI0511 21:49:25.264605    3522 log.go:172] (0xc0009ac0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:49:25.612911    3522 log.go:172] (0xc00095c840) Data frame received for 3\nI0511 21:49:25.612958    3522 log.go:172] (0xc0009ac000) (3) Data frame handling\nI0511 21:49:25.613003    3522 log.go:172] (0xc0009ac000) (3) Data frame sent\nI0511 21:49:25.613307    3522 log.go:172] (0xc00095c840) Data frame received for 5\nI0511 21:49:25.613341    3522 log.go:172] (0xc0009ac0a0) (5) Data frame handling\nI0511 21:49:25.613632    3522 log.go:172] (0xc00095c840) Data frame received for 3\nI0511 21:49:25.613649    3522 log.go:172] (0xc0009ac000) (3) Data frame handling\nI0511 21:49:25.615074    3522 log.go:172] (0xc00095c840) Data frame received for 1\nI0511 21:49:25.615108    3522 log.go:172] (0xc000687720) (1) Data frame handling\nI0511 21:49:25.615139    3522 log.go:172] (0xc000687720) (1) Data frame sent\nI0511 21:49:25.615173  
  3522 log.go:172] (0xc00095c840) (0xc000687720) Stream removed, broadcasting: 1\nI0511 21:49:25.615208    3522 log.go:172] (0xc00095c840) Go away received\nI0511 21:49:25.615725    3522 log.go:172] (0xc00095c840) (0xc000687720) Stream removed, broadcasting: 1\nI0511 21:49:25.615748    3522 log.go:172] (0xc00095c840) (0xc0009ac000) Stream removed, broadcasting: 3\nI0511 21:49:25.615761    3522 log.go:172] (0xc00095c840) (0xc0009ac0a0) Stream removed, broadcasting: 5\n"
May 11 21:49:25.620: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:49:25.620: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 11 21:49:35.651: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
May 11 21:49:46.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4019 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:49:46.194: INFO: stderr: "I0511 21:49:46.140517    3543 log.go:172] (0xc000a39130) (0xc0009886e0) Create stream\nI0511 21:49:46.140556    3543 log.go:172] (0xc000a39130) (0xc0009886e0) Stream added, broadcasting: 1\nI0511 21:49:46.143311    3543 log.go:172] (0xc000a39130) Reply frame received for 1\nI0511 21:49:46.143379    3543 log.go:172] (0xc000a39130) (0xc0008b8500) Create stream\nI0511 21:49:46.143391    3543 log.go:172] (0xc000a39130) (0xc0008b8500) Stream added, broadcasting: 3\nI0511 21:49:46.144570    3543 log.go:172] (0xc000a39130) Reply frame received for 3\nI0511 21:49:46.144622    3543 log.go:172] (0xc000a39130) (0xc0008b85a0) Create stream\nI0511 21:49:46.144639    3543 log.go:172] (0xc000a39130) (0xc0008b85a0) Stream added, broadcasting: 5\nI0511 21:49:46.145825    3543 log.go:172] (0xc000a39130) Reply frame received for 5\nI0511 21:49:46.188948    3543 log.go:172] (0xc000a39130) Data frame received for 3\nI0511 21:49:46.189003    3543 log.go:172] (0xc0008b8500) (3) Data frame handling\nI0511 21:49:46.189021    3543 log.go:172] (0xc0008b8500) (3) Data frame sent\nI0511 21:49:46.189459    3543 log.go:172] (0xc000a39130) Data frame received for 5\nI0511 21:49:46.189496    3543 log.go:172] (0xc0008b85a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:49:46.189566    3543 log.go:172] (0xc000a39130) Data frame received for 3\nI0511 21:49:46.189593    3543 log.go:172] (0xc0008b8500) (3) Data frame handling\nI0511 21:49:46.189622    3543 log.go:172] (0xc0008b85a0) (5) Data frame sent\nI0511 21:49:46.189643    3543 log.go:172] (0xc000a39130) Data frame received for 5\nI0511 21:49:46.189667    3543 log.go:172] (0xc0008b85a0) (5) Data frame handling\nI0511 21:49:46.190635    3543 log.go:172] (0xc000a39130) Data frame received for 1\nI0511 21:49:46.190726    3543 log.go:172] (0xc0009886e0) (1) Data frame handling\nI0511 21:49:46.190824    3543 log.go:172] (0xc0009886e0) (1) Data frame sent\nI0511 21:49:46.190918  
  3543 log.go:172] (0xc000a39130) (0xc0009886e0) Stream removed, broadcasting: 1\nI0511 21:49:46.190954    3543 log.go:172] (0xc000a39130) Go away received\nI0511 21:49:46.191181    3543 log.go:172] (0xc000a39130) (0xc0009886e0) Stream removed, broadcasting: 1\nI0511 21:49:46.191193    3543 log.go:172] (0xc000a39130) (0xc0008b8500) Stream removed, broadcasting: 3\nI0511 21:49:46.191200    3543 log.go:172] (0xc000a39130) (0xc0008b85a0) Stream removed, broadcasting: 5\n"
May 11 21:49:46.194: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 11 21:49:46.194: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 11 21:50:17.258: INFO: Waiting for StatefulSet statefulset-4019/ss2 to complete update
STEP: Rolling back to a previous revision
May 11 21:50:27.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4019 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 11 21:50:27.509: INFO: stderr: "I0511 21:50:27.376729    3562 log.go:172] (0xc0000e8420) (0xc0005c2960) Create stream\nI0511 21:50:27.376784    3562 log.go:172] (0xc0000e8420) (0xc0005c2960) Stream added, broadcasting: 1\nI0511 21:50:27.378235    3562 log.go:172] (0xc0000e8420) Reply frame received for 1\nI0511 21:50:27.378262    3562 log.go:172] (0xc0000e8420) (0xc000a64000) Create stream\nI0511 21:50:27.378273    3562 log.go:172] (0xc0000e8420) (0xc000a64000) Stream added, broadcasting: 3\nI0511 21:50:27.378964    3562 log.go:172] (0xc0000e8420) Reply frame received for 3\nI0511 21:50:27.379006    3562 log.go:172] (0xc0000e8420) (0xc0003ec000) Create stream\nI0511 21:50:27.379015    3562 log.go:172] (0xc0000e8420) (0xc0003ec000) Stream added, broadcasting: 5\nI0511 21:50:27.379666    3562 log.go:172] (0xc0000e8420) Reply frame received for 5\nI0511 21:50:27.479710    3562 log.go:172] (0xc0000e8420) Data frame received for 5\nI0511 21:50:27.479727    3562 log.go:172] (0xc0003ec000) (5) Data frame handling\nI0511 21:50:27.479738    3562 log.go:172] (0xc0003ec000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:50:27.505825    3562 log.go:172] (0xc0000e8420) Data frame received for 5\nI0511 21:50:27.505860    3562 log.go:172] (0xc0003ec000) (5) Data frame handling\nI0511 21:50:27.505882    3562 log.go:172] (0xc0000e8420) Data frame received for 3\nI0511 21:50:27.505891    3562 log.go:172] (0xc000a64000) (3) Data frame handling\nI0511 21:50:27.505899    3562 log.go:172] (0xc000a64000) (3) Data frame sent\nI0511 21:50:27.505906    3562 log.go:172] (0xc0000e8420) Data frame received for 3\nI0511 21:50:27.505913    3562 log.go:172] (0xc000a64000) (3) Data frame handling\nI0511 21:50:27.507061    3562 log.go:172] (0xc0000e8420) Data frame received for 1\nI0511 21:50:27.507077    3562 log.go:172] (0xc0005c2960) (1) Data frame handling\nI0511 21:50:27.507092    3562 log.go:172] (0xc0005c2960) (1) Data frame sent\nI0511 21:50:27.507102  
  3562 log.go:172] (0xc0000e8420) (0xc0005c2960) Stream removed, broadcasting: 1\nI0511 21:50:27.507179    3562 log.go:172] (0xc0000e8420) Go away received\nI0511 21:50:27.507332    3562 log.go:172] (0xc0000e8420) (0xc0005c2960) Stream removed, broadcasting: 1\nI0511 21:50:27.507344    3562 log.go:172] (0xc0000e8420) (0xc000a64000) Stream removed, broadcasting: 3\nI0511 21:50:27.507352    3562 log.go:172] (0xc0000e8420) (0xc0003ec000) Stream removed, broadcasting: 5\n"
May 11 21:50:27.509: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 11 21:50:27.509: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 11 21:50:37.538: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
May 11 21:50:47.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4019 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 21:50:47.762: INFO: stderr: "I0511 21:50:47.704640    3581 log.go:172] (0xc0005ea0b0) (0xc000432be0) Create stream\nI0511 21:50:47.704675    3581 log.go:172] (0xc0005ea0b0) (0xc000432be0) Stream added, broadcasting: 1\nI0511 21:50:47.706135    3581 log.go:172] (0xc0005ea0b0) Reply frame received for 1\nI0511 21:50:47.706156    3581 log.go:172] (0xc0005ea0b0) (0xc00099e000) Create stream\nI0511 21:50:47.706163    3581 log.go:172] (0xc0005ea0b0) (0xc00099e000) Stream added, broadcasting: 3\nI0511 21:50:47.706809    3581 log.go:172] (0xc0005ea0b0) Reply frame received for 3\nI0511 21:50:47.706844    3581 log.go:172] (0xc0005ea0b0) (0xc000689360) Create stream\nI0511 21:50:47.706853    3581 log.go:172] (0xc0005ea0b0) (0xc000689360) Stream added, broadcasting: 5\nI0511 21:50:47.707592    3581 log.go:172] (0xc0005ea0b0) Reply frame received for 5\nI0511 21:50:47.759404    3581 log.go:172] (0xc0005ea0b0) Data frame received for 3\nI0511 21:50:47.759427    3581 log.go:172] (0xc00099e000) (3) Data frame handling\nI0511 21:50:47.759433    3581 log.go:172] (0xc00099e000) (3) Data frame sent\nI0511 21:50:47.759437    3581 log.go:172] (0xc0005ea0b0) Data frame received for 3\nI0511 21:50:47.759441    3581 log.go:172] (0xc00099e000) (3) Data frame handling\nI0511 21:50:47.759456    3581 log.go:172] (0xc0005ea0b0) Data frame received for 5\nI0511 21:50:47.759460    3581 log.go:172] (0xc000689360) (5) Data frame handling\nI0511 21:50:47.759464    3581 log.go:172] (0xc000689360) (5) Data frame sent\nI0511 21:50:47.759468    3581 log.go:172] (0xc0005ea0b0) Data frame received for 5\nI0511 21:50:47.759474    3581 log.go:172] (0xc000689360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:50:47.760263    3581 log.go:172] (0xc0005ea0b0) Data frame received for 1\nI0511 21:50:47.760283    3581 log.go:172] (0xc000432be0) (1) Data frame handling\nI0511 21:50:47.760296    3581 log.go:172] (0xc000432be0) (1) Data frame sent\nI0511 21:50:47.760313  
  3581 log.go:172] (0xc0005ea0b0) (0xc000432be0) Stream removed, broadcasting: 1\nI0511 21:50:47.760362    3581 log.go:172] (0xc0005ea0b0) Go away received\nI0511 21:50:47.760521    3581 log.go:172] (0xc0005ea0b0) (0xc000432be0) Stream removed, broadcasting: 1\nI0511 21:50:47.760540    3581 log.go:172] (0xc0005ea0b0) (0xc00099e000) Stream removed, broadcasting: 3\nI0511 21:50:47.760552    3581 log.go:172] (0xc0005ea0b0) (0xc000689360) Stream removed, broadcasting: 5\n"
May 11 21:50:47.762: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 11 21:50:47.762: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 11 21:50:57.778: INFO: Waiting for StatefulSet statefulset-4019/ss2 to complete update
May 11 21:50:57.778: INFO: Waiting for Pod statefulset-4019/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 11 21:50:57.778: INFO: Waiting for Pod statefulset-4019/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 11 21:50:57.778: INFO: Waiting for Pod statefulset-4019/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 11 21:51:08.169: INFO: Waiting for StatefulSet statefulset-4019/ss2 to complete update
May 11 21:51:08.169: INFO: Waiting for Pod statefulset-4019/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 11 21:51:08.169: INFO: Waiting for Pod statefulset-4019/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 11 21:51:18.111: INFO: Waiting for StatefulSet statefulset-4019/ss2 to complete update
May 11 21:51:18.111: INFO: Waiting for Pod statefulset-4019/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 11 21:51:28.258: INFO: Waiting for StatefulSet statefulset-4019/ss2 to complete update
May 11 21:51:28.258: INFO: Waiting for Pod statefulset-4019/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
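The "Waiting for Pod ... to have revision ... update revision ..." lines above come from a poll loop that keeps re-listing the StatefulSet's pods until every pod's controller-revision-hash label matches the update revision. A minimal sketch of that check (the pod snapshot is illustrative; the real test uses client-go against the API server):

```python
# Hypothetical mid-rollout snapshot mirroring the log: ss2-0 is still on the
# old revision while ss2-1 and ss2-2 have already been updated.
UPDATE_REVISION = "ss2-84f9d6bf57"

pods = {
    "ss2-0": {"controller-revision-hash": "ss2-65c7964b94"},
    "ss2-1": {"controller-revision-hash": "ss2-84f9d6bf57"},
    "ss2-2": {"controller-revision-hash": "ss2-84f9d6bf57"},
}

def pods_pending_update(pods, update_revision):
    """Return pod names whose revision label does not yet match the update revision."""
    return sorted(
        name for name, labels in pods.items()
        if labels.get("controller-revision-hash") != update_revision
    )

# The e2e loop logs one "Waiting for Pod ..." line per pending pod and
# re-polls until this list is empty.
pending = pods_pending_update(pods, UPDATE_REVISION)
```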
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 11 21:51:37.784: INFO: Deleting all statefulset in ns statefulset-4019
May 11 21:51:37.786: INFO: Scaling statefulset ss2 to 0
May 11 21:52:07.803: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:52:07.806: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:52:07.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4019" for this suite.

• [SLOW TEST:183.274 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":203,"skipped":3351,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:52:07.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:52:08.285: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
May 11 21:52:10.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:52:13.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:52:14.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830728, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
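The repeated DeploymentStatus dumps above are the framework polling until the webhook deployment becomes available. The readiness predicate it is effectively waiting on can be sketched as follows (field names match the status structs in the log; the dict form is illustrative):

```python
def deployment_ready(status, desired_replicas=1):
    """True once the desired number of replicas is ready and none are unavailable."""
    return (
        status.get("ReadyReplicas", 0) >= desired_replicas
        and status.get("UnavailableReplicas", 0) == 0
    )

# Status mirroring the log entries above, trimmed to the fields the check uses:
progressing = {"Replicas": 1, "UpdatedReplicas": 1, "ReadyReplicas": 0,
               "AvailableReplicas": 0, "UnavailableReplicas": 1}
# ...and the state the poll loop eventually sees before moving on:
available = {"Replicas": 1, "UpdatedReplicas": 1, "ReadyReplicas": 1,
             "AvailableReplicas": 1, "UnavailableReplicas": 0}
```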
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:52:17.660: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:52:21.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-187" for this suite.
STEP: Destroying namespace "webhook-187-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.456 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":204,"skipped":3371,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:52:24.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 11 21:52:24.659: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:52:34.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-302" for this suite.

• [SLOW TEST:10.041 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":205,"skipped":3379,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:52:34.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-d0de271a-317d-4600-9509-1cabdb9c024f
STEP: Creating a pod to test consume secrets
May 11 21:52:35.020: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d71c437b-4b40-472b-a785-cd72ea71a118" in namespace "projected-4202" to be "Succeeded or Failed"
May 11 21:52:35.075: INFO: Pod "pod-projected-secrets-d71c437b-4b40-472b-a785-cd72ea71a118": Phase="Pending", Reason="", readiness=false. Elapsed: 54.245475ms
May 11 21:52:37.079: INFO: Pod "pod-projected-secrets-d71c437b-4b40-472b-a785-cd72ea71a118": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059007385s
May 11 21:52:39.083: INFO: Pod "pod-projected-secrets-d71c437b-4b40-472b-a785-cd72ea71a118": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062890819s
May 11 21:52:41.136: INFO: Pod "pod-projected-secrets-d71c437b-4b40-472b-a785-cd72ea71a118": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115543926s
STEP: Saw pod success
May 11 21:52:41.136: INFO: Pod "pod-projected-secrets-d71c437b-4b40-472b-a785-cd72ea71a118" satisfied condition "Succeeded or Failed"
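The Phase="Pending" ... Phase="Succeeded" lines above are a simple poll-until-terminal loop over the pod's status.phase. A self-contained sketch of that pattern, with the phase sequence simulated rather than fetched from a cluster (the real test sleeps ~2s between polls and times out after 5 minutes):

```python
def wait_for_pod_phase(poll, goal_phases=("Succeeded", "Failed"), max_polls=150):
    """Poll a phase-returning callable until it reports a terminal phase.

    `poll` stands in for a GET of the pod's status.phase.
    """
    for attempt in range(max_polls):
        phase = poll()
        if phase in goal_phases:
            return phase, attempt
    raise TimeoutError("pod never reached a terminal phase")

# Simulated sequence matching the log: three Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result, polls = wait_for_pod_phase(lambda: next(phases))
```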
May 11 21:52:41.139: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-d71c437b-4b40-472b-a785-cd72ea71a118 container projected-secret-volume-test: 
STEP: delete the pod
May 11 21:52:41.205: INFO: Waiting for pod pod-projected-secrets-d71c437b-4b40-472b-a785-cd72ea71a118 to disappear
May 11 21:52:41.357: INFO: Pod pod-projected-secrets-d71c437b-4b40-472b-a785-cd72ea71a118 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:52:41.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4202" for this suite.

• [SLOW TEST:7.036 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3404,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:52:41.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:52:48.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-339" for this suite.

• [SLOW TEST:9.042 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":207,"skipped":3421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:52:50.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-3738
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-3738
STEP: Deleting pre-stop pod
May 11 21:53:10.293: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
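The JSON blob above is the state the tester pod reports back; the test passes once the server has received at least one preStop callback and recorded no errors. A sketch of that verification over the same payload (trimmed to the fields the check reads):

```python
import json

# The state blob reported in the log above, reduced to the relevant fields.
state = json.loads("""{
  "Hostname": "server",
  "Sent": null,
  "Received": {"prestop": 1},
  "Errors": null,
  "StillContactingPeers": true
}""")

def prestop_delivered(state):
    """True once the preStop hook has hit the server and no errors were logged."""
    received = state.get("Received") or {}
    return received.get("prestop", 0) >= 1 and not state.get("Errors")
```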
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:53:10.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3738" for this suite.

• [SLOW TEST:20.634 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":208,"skipped":3451,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:53:11.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
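The verification step above boils down to checking that every concurrently opened watch delivers the same sequence of resource versions. A minimal sketch of that comparison (the version strings are illustrative):

```python
def same_order(streams):
    """True if every watch stream observed resourceVersions in the same order."""
    first = streams[0]
    return all(stream == first for stream in streams[1:])

# Hypothetical resourceVersion sequences seen by three concurrent watches:
watch_a = ["100", "101", "102"]
watch_b = ["100", "101", "102"]
watch_c = ["100", "102", "101"]  # out of order -- this would fail the test
```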
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:53:20.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5563" for this suite.

• [SLOW TEST:9.172 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":209,"skipped":3454,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:53:20.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
May 11 21:53:20.693: INFO: Waiting up to 5m0s for pod "pod-5bf32800-859a-44b8-9e06-496ea15bcd06" in namespace "emptydir-9975" to be "Succeeded or Failed"
May 11 21:53:20.704: INFO: Pod "pod-5bf32800-859a-44b8-9e06-496ea15bcd06": Phase="Pending", Reason="", readiness=false. Elapsed: 11.084796ms
May 11 21:53:22.708: INFO: Pod "pod-5bf32800-859a-44b8-9e06-496ea15bcd06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015202216s
May 11 21:53:24.987: INFO: Pod "pod-5bf32800-859a-44b8-9e06-496ea15bcd06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293976867s
May 11 21:53:27.033: INFO: Pod "pod-5bf32800-859a-44b8-9e06-496ea15bcd06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339443517s
May 11 21:53:29.083: INFO: Pod "pod-5bf32800-859a-44b8-9e06-496ea15bcd06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.389932143s
STEP: Saw pod success
May 11 21:53:29.083: INFO: Pod "pod-5bf32800-859a-44b8-9e06-496ea15bcd06" satisfied condition "Succeeded or Failed"
May 11 21:53:29.086: INFO: Trying to get logs from node kali-worker pod pod-5bf32800-859a-44b8-9e06-496ea15bcd06 container test-container: 
STEP: delete the pod
May 11 21:53:29.171: INFO: Waiting for pod pod-5bf32800-859a-44b8-9e06-496ea15bcd06 to disappear
May 11 21:53:29.298: INFO: Pod pod-5bf32800-859a-44b8-9e06-496ea15bcd06 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:53:29.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9975" for this suite.

• [SLOW TEST:9.090 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3466,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:53:29.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-ab7053e6-e3e3-4799-a2e3-64f63c2a444f
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:53:29.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1702" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":211,"skipped":3471,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:53:29.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:53:29.449: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:53:38.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5592" for this suite.

• [SLOW TEST:9.184 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":212,"skipped":3474,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:53:38.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
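"Status is calculated" above means the quota controller has synced: .status.hard mirrors .spec.hard and every tracked resource has a (possibly zero) used value. A sketch of that condition, assuming a dict-shaped quota object (the resource names and amounts are illustrative):

```python
def quota_status_calculated(quota):
    """True once status.hard matches spec.hard and each resource has a used entry."""
    spec_hard = quota["spec"]["hard"]
    status = quota.get("status", {})
    return (
        status.get("hard") == spec_hard
        and set(status.get("used", {})) == set(spec_hard)
    )

# Hypothetical quota shortly after creation, once the controller has synced:
quota = {
    "spec": {"hard": {"pods": "5", "services": "3"}},
    "status": {"hard": {"pods": "5", "services": "3"},
               "used": {"pods": "0", "services": "0"}},
}
```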
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:53:45.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6658" for this suite.

• [SLOW TEST:7.423 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":213,"skipped":3481,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:53:45.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:54:04.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9994" for this suite.
STEP: Destroying namespace "nsdeletetest-1823" for this suite.
May 11 21:54:04.783: INFO: Namespace nsdeletetest-1823 was already deleted
STEP: Destroying namespace "nsdeletetest-1369" for this suite.

• [SLOW TEST:18.798 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":214,"skipped":3503,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:54:04.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9317.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9317.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9317.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9317.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9317.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9317.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 21:54:12.968: INFO: DNS probes using dns-9317/dns-test-482fe74d-d37b-4043-9f10-83f2551e1fbc succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:54:13.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9317" for this suite.

• [SLOW TEST:8.335 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":215,"skipped":3515,"failed":0}
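The probe commands above build a pod A record by piping `hostname -i` through awk, replacing the dots in the pod IP with dashes and appending `<namespace>.pod.cluster.local`. A minimal Go sketch of that same transformation (the helper name `podARecordName` is ours, not part of the e2e framework; the namespace matches the one in this run):

```go
package main

import (
	"fmt"
	"strings"
)

// podARecordName converts a pod IP such as "10.244.1.5" into the dashed
// pod A-record form that cluster DNS serves, mirroring the awk pipeline
// in the probe script: dots become dashes, then the namespace and the
// "pod.cluster.local" suffix are appended.
func podARecordName(podIP, namespace string) string {
	return fmt.Sprintf("%s.%s.pod.cluster.local",
		strings.ReplaceAll(podIP, ".", "-"), namespace)
}

func main() {
	// prints 10-244-1-5.dns-9317.pod.cluster.local
	fmt.Println(podARecordName("10.244.1.5", "dns-9317"))
}
```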
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:54:13.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-8859
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8859 to expose endpoints map[]
May 11 21:54:13.651: INFO: Get endpoints failed (60.952489ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 11 21:54:14.681: INFO: successfully validated that service multi-endpoint-test in namespace services-8859 exposes endpoints map[] (1.091299711s elapsed)
STEP: Creating pod pod1 in namespace services-8859
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8859 to expose endpoints map[pod1:[100]]
May 11 21:54:19.396: INFO: successfully validated that service multi-endpoint-test in namespace services-8859 exposes endpoints map[pod1:[100]] (4.695361888s elapsed)
STEP: Creating pod pod2 in namespace services-8859
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8859 to expose endpoints map[pod1:[100] pod2:[101]]
May 11 21:54:23.545: INFO: successfully validated that service multi-endpoint-test in namespace services-8859 exposes endpoints map[pod1:[100] pod2:[101]] (4.145745451s elapsed)
STEP: Deleting pod pod1 in namespace services-8859
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8859 to expose endpoints map[pod2:[101]]
May 11 21:54:24.606: INFO: successfully validated that service multi-endpoint-test in namespace services-8859 exposes endpoints map[pod2:[101]] (1.055994885s elapsed)
STEP: Deleting pod pod2 in namespace services-8859
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8859 to expose endpoints map[]
May 11 21:54:25.642: INFO: successfully validated that service multi-endpoint-test in namespace services-8859 exposes endpoints map[] (1.03343855s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:54:26.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8859" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.034 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":216,"skipped":3558,"failed":0}
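The spec above repeatedly waits for the service to "expose endpoints map[pod1:[100] pod2:[101]]", i.e. for the observed pod-name→ports mapping to match an expected one. A standalone sketch of that comparison (not the framework's actual implementation; port order is ignored, which is a choice we make here for robustness):

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// endpointsEqual reports whether the observed pod-name→ports mapping
// matches the expected one, comparing each pod's port list as a set.
func endpointsEqual(expected, observed map[string][]int) bool {
	if len(expected) != len(observed) {
		return false
	}
	for name, want := range expected {
		got, ok := observed[name]
		if !ok {
			return false
		}
		w := append([]int(nil), want...)
		g := append([]int(nil), got...)
		sort.Ints(w)
		sort.Ints(g)
		if !reflect.DeepEqual(w, g) {
			return false
		}
	}
	return true
}

func main() {
	expected := map[string][]int{"pod1": {100}, "pod2": {101}}
	observed := map[string][]int{"pod2": {101}, "pod1": {100}}
	fmt.Println(endpointsEqual(expected, observed)) // prints true
}
```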
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:54:26.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May 11 21:54:26.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:54:40.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4161" for this suite.

• [SLOW TEST:14.694 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":217,"skipped":3621,"failed":0}
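The step "mark a version not served" flips `served: false` on one CRD version, after which the aggregator drops that version's definitions from the published OpenAPI spec. A toy Go sketch of the served-version filtering involved (the struct is a stand-in for the real apiextensions `CustomResourceDefinitionVersion` type; this is illustrative, not apiserver code):

```go
package main

import "fmt"

// crdVersion captures just the fields relevant here; the real type in
// apiextensions.k8s.io carries many more (schema, storage flag, etc.).
type crdVersion struct {
	Name   string
	Served bool
}

// servedVersions returns the versions still published to clients. When
// a version's Served flag is cleared, it drops out of this list, and
// its definitions disappear from the OpenAPI spec while the remaining
// versions stay unchanged -- the two conditions the test checks.
func servedVersions(versions []crdVersion) []string {
	var out []string
	for _, v := range versions {
		if v.Served {
			out = append(out, v.Name)
		}
	}
	return out
}

func main() {
	vs := []crdVersion{{Name: "v1", Served: true}, {Name: "v2", Served: false}}
	fmt.Println(servedVersions(vs)) // prints [v1]
}
```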
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:54:40.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 21:54:41.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:54:46.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4133" for this suite.

• [SLOW TEST:5.212 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3646,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:54:46.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-1121ba40-592f-481c-8d6f-71d692661820
STEP: Creating a pod to test consume secrets
May 11 21:54:46.183: INFO: Waiting up to 5m0s for pod "pod-secrets-8646b63c-c618-4f10-896f-addcb4fccfcb" in namespace "secrets-5097" to be "Succeeded or Failed"
May 11 21:54:46.244: INFO: Pod "pod-secrets-8646b63c-c618-4f10-896f-addcb4fccfcb": Phase="Pending", Reason="", readiness=false. Elapsed: 60.544256ms
May 11 21:54:48.248: INFO: Pod "pod-secrets-8646b63c-c618-4f10-896f-addcb4fccfcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064914801s
May 11 21:54:50.252: INFO: Pod "pod-secrets-8646b63c-c618-4f10-896f-addcb4fccfcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068495511s
May 11 21:54:52.256: INFO: Pod "pod-secrets-8646b63c-c618-4f10-896f-addcb4fccfcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072694087s
STEP: Saw pod success
May 11 21:54:52.256: INFO: Pod "pod-secrets-8646b63c-c618-4f10-896f-addcb4fccfcb" satisfied condition "Succeeded or Failed"
May 11 21:54:52.259: INFO: Trying to get logs from node kali-worker pod pod-secrets-8646b63c-c618-4f10-896f-addcb4fccfcb container secret-volume-test: 
STEP: delete the pod
May 11 21:54:52.493: INFO: Waiting for pod pod-secrets-8646b63c-c618-4f10-896f-addcb4fccfcb to disappear
May 11 21:54:52.520: INFO: Pod pod-secrets-8646b63c-c618-4f10-896f-addcb4fccfcb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:54:52.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5097" for this suite.

• [SLOW TEST:6.505 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3692,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:54:52.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8613
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8613
STEP: Creating statefulset with conflicting port in namespace statefulset-8613
STEP: Waiting until pod test-pod starts running in namespace statefulset-8613
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8613
May 11 21:55:02.300: INFO: Observed stateful pod in namespace: statefulset-8613, name: ss-0, uid: c9c600f0-ec51-4890-ae09-31324dddcb5b, status phase: Pending. Waiting for statefulset controller to delete.
May 11 21:55:02.455: INFO: Observed stateful pod in namespace: statefulset-8613, name: ss-0, uid: c9c600f0-ec51-4890-ae09-31324dddcb5b, status phase: Failed. Waiting for statefulset controller to delete.
May 11 21:55:02.548: INFO: Observed stateful pod in namespace: statefulset-8613, name: ss-0, uid: c9c600f0-ec51-4890-ae09-31324dddcb5b, status phase: Failed. Waiting for statefulset controller to delete.
May 11 21:55:02.618: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8613
STEP: Removing pod with conflicting port in namespace statefulset-8613
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8613 and reaches running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 11 21:55:08.964: INFO: Deleting all statefulset in ns statefulset-8613
May 11 21:55:08.967: INFO: Scaling statefulset ss to 0
May 11 21:55:19.074: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:55:19.076: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:55:19.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8613" for this suite.

• [SLOW TEST:27.526 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":220,"skipped":3704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:55:20.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-ff5e4c42-a73a-4c40-b8b6-c05ad7da9276 in namespace container-probe-1073
May 11 21:55:27.898: INFO: Started pod liveness-ff5e4c42-a73a-4c40-b8b6-c05ad7da9276 in namespace container-probe-1073
STEP: checking the pod's current state and verifying that restartCount is present
May 11 21:55:27.899: INFO: Initial restart count of pod liveness-ff5e4c42-a73a-4c40-b8b6-c05ad7da9276 is 0
May 11 21:55:47.985: INFO: Restart count of pod container-probe-1073/liveness-ff5e4c42-a73a-4c40-b8b6-c05ad7da9276 is now 1 (20.08543842s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:55:48.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1073" for this suite.

• [SLOW TEST:28.092 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3726,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:55:48.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-1dddc972-568e-4a5a-9083-953eb0f81479
STEP: Creating secret with name s-test-opt-upd-54d5b0e2-ead7-45f0-ba0a-09ad96d9db0f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1dddc972-568e-4a5a-9083-953eb0f81479
STEP: Updating secret s-test-opt-upd-54d5b0e2-ead7-45f0-ba0a-09ad96d9db0f
STEP: Creating secret with name s-test-opt-create-197debcb-decf-4073-9b23-ad5041ac52b5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:57:28.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1326" for this suite.

• [SLOW TEST:100.550 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3736,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:57:28.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
May 11 21:57:28.869: INFO: Waiting up to 5m0s for pod "pod-b2e1c491-1821-42e3-8bad-909fa112b74d" in namespace "emptydir-9951" to be "Succeeded or Failed"
May 11 21:57:28.891: INFO: Pod "pod-b2e1c491-1821-42e3-8bad-909fa112b74d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.305555ms
May 11 21:57:31.508: INFO: Pod "pod-b2e1c491-1821-42e3-8bad-909fa112b74d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638933785s
May 11 21:57:33.833: INFO: Pod "pod-b2e1c491-1821-42e3-8bad-909fa112b74d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.963820455s
May 11 21:57:36.397: INFO: Pod "pod-b2e1c491-1821-42e3-8bad-909fa112b74d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.527612643s
STEP: Saw pod success
May 11 21:57:36.397: INFO: Pod "pod-b2e1c491-1821-42e3-8bad-909fa112b74d" satisfied condition "Succeeded or Failed"
May 11 21:57:36.578: INFO: Trying to get logs from node kali-worker pod pod-b2e1c491-1821-42e3-8bad-909fa112b74d container test-container: 
STEP: delete the pod
May 11 21:57:36.838: INFO: Waiting for pod pod-b2e1c491-1821-42e3-8bad-909fa112b74d to disappear
May 11 21:57:36.910: INFO: Pod pod-b2e1c491-1821-42e3-8bad-909fa112b74d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:57:36.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9951" for this suite.

• [SLOW TEST:8.179 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3737,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:57:36.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-db398fec-444d-4473-a3f4-5b8a473f6715
STEP: Creating a pod to test consume secrets
May 11 21:57:37.475: INFO: Waiting up to 5m0s for pod "pod-secrets-cd243577-c7d6-47ed-8032-e90201dad1a1" in namespace "secrets-4557" to be "Succeeded or Failed"
May 11 21:57:37.528: INFO: Pod "pod-secrets-cd243577-c7d6-47ed-8032-e90201dad1a1": Phase="Pending", Reason="", readiness=false. Elapsed: 53.268561ms
May 11 21:57:39.755: INFO: Pod "pod-secrets-cd243577-c7d6-47ed-8032-e90201dad1a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280140077s
May 11 21:57:41.759: INFO: Pod "pod-secrets-cd243577-c7d6-47ed-8032-e90201dad1a1": Phase="Running", Reason="", readiness=true. Elapsed: 4.28424082s
May 11 21:57:43.762: INFO: Pod "pod-secrets-cd243577-c7d6-47ed-8032-e90201dad1a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.287621001s
STEP: Saw pod success
May 11 21:57:43.762: INFO: Pod "pod-secrets-cd243577-c7d6-47ed-8032-e90201dad1a1" satisfied condition "Succeeded or Failed"
May 11 21:57:43.764: INFO: Trying to get logs from node kali-worker pod pod-secrets-cd243577-c7d6-47ed-8032-e90201dad1a1 container secret-volume-test: 
STEP: delete the pod
May 11 21:57:43.804: INFO: Waiting for pod pod-secrets-cd243577-c7d6-47ed-8032-e90201dad1a1 to disappear
May 11 21:57:43.916: INFO: Pod pod-secrets-cd243577-c7d6-47ed-8032-e90201dad1a1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:57:43.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4557" for this suite.

• [SLOW TEST:7.075 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3771,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:57:43.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-2529d6df-a15d-43b7-b379-2970fd83574e
STEP: Creating secret with name s-test-opt-upd-db66b2aa-e63f-46b1-b936-c9e40e71911a
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2529d6df-a15d-43b7-b379-2970fd83574e
STEP: Updating secret s-test-opt-upd-db66b2aa-e63f-46b1-b936-c9e40e71911a
STEP: Creating secret with name s-test-opt-create-5d3da8af-6306-4020-bc95-811ab2c074a4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:59:03.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4720" for this suite.

• [SLOW TEST:79.927 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3794,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:59:03.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-9992
STEP: creating replication controller nodeport-test in namespace services-9992
I0511 21:59:04.937062       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9992, replica count: 2
I0511 21:59:07.987648       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 21:59:10.987846       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 11 21:59:10.987: INFO: Creating new exec pod
May 11 21:59:19.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9992 execpodqbmvk -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
May 11 21:59:23.040: INFO: stderr: "I0511 21:59:22.938776    3603 log.go:172] (0xc0008dc8f0) (0xc0008aa1e0) Create stream\nI0511 21:59:22.938807    3603 log.go:172] (0xc0008dc8f0) (0xc0008aa1e0) Stream added, broadcasting: 1\nI0511 21:59:22.940889    3603 log.go:172] (0xc0008dc8f0) Reply frame received for 1\nI0511 21:59:22.940923    3603 log.go:172] (0xc0008dc8f0) (0xc0007db680) Create stream\nI0511 21:59:22.940934    3603 log.go:172] (0xc0008dc8f0) (0xc0007db680) Stream added, broadcasting: 3\nI0511 21:59:22.941936    3603 log.go:172] (0xc0008dc8f0) Reply frame received for 3\nI0511 21:59:22.941964    3603 log.go:172] (0xc0008dc8f0) (0xc000a72000) Create stream\nI0511 21:59:22.941989    3603 log.go:172] (0xc0008dc8f0) (0xc000a72000) Stream added, broadcasting: 5\nI0511 21:59:22.942787    3603 log.go:172] (0xc0008dc8f0) Reply frame received for 5\nI0511 21:59:23.033011    3603 log.go:172] (0xc0008dc8f0) Data frame received for 5\nI0511 21:59:23.033409    3603 log.go:172] (0xc000a72000) (5) Data frame handling\nI0511 21:59:23.033533    3603 log.go:172] (0xc000a72000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0511 21:59:23.033711    3603 log.go:172] (0xc0008dc8f0) Data frame received for 5\nI0511 21:59:23.033795    3603 log.go:172] (0xc000a72000) (5) Data frame handling\nI0511 21:59:23.033904    3603 log.go:172] (0xc000a72000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0511 21:59:23.033941    3603 log.go:172] (0xc0008dc8f0) Data frame received for 3\nI0511 21:59:23.033998    3603 log.go:172] (0xc0007db680) (3) Data frame handling\nI0511 21:59:23.034040    3603 log.go:172] (0xc0008dc8f0) Data frame received for 5\nI0511 21:59:23.034062    3603 log.go:172] (0xc000a72000) (5) Data frame handling\nI0511 21:59:23.035995    3603 log.go:172] (0xc0008dc8f0) Data frame received for 1\nI0511 21:59:23.036025    3603 log.go:172] (0xc0008aa1e0) (1) Data frame handling\nI0511 21:59:23.036037    3603 log.go:172] (0xc0008aa1e0) (1) Data frame sent\nI0511 21:59:23.036157    3603 log.go:172] (0xc0008dc8f0) (0xc0008aa1e0) Stream removed, broadcasting: 1\nI0511 21:59:23.036260    3603 log.go:172] (0xc0008dc8f0) Go away received\nI0511 21:59:23.036574    3603 log.go:172] (0xc0008dc8f0) (0xc0008aa1e0) Stream removed, broadcasting: 1\nI0511 21:59:23.036596    3603 log.go:172] (0xc0008dc8f0) (0xc0007db680) Stream removed, broadcasting: 3\nI0511 21:59:23.036607    3603 log.go:172] (0xc0008dc8f0) (0xc000a72000) Stream removed, broadcasting: 5\n"
May 11 21:59:23.040: INFO: stdout: ""
May 11 21:59:23.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9992 execpodqbmvk -- /bin/sh -x -c nc -zv -t -w 2 10.105.132.166 80'
May 11 21:59:23.227: INFO: stderr: "I0511 21:59:23.169833    3635 log.go:172] (0xc00010ed10) (0xc0000c6320) Create stream\nI0511 21:59:23.169907    3635 log.go:172] (0xc00010ed10) (0xc0000c6320) Stream added, broadcasting: 1\nI0511 21:59:23.172721    3635 log.go:172] (0xc00010ed10) Reply frame received for 1\nI0511 21:59:23.172761    3635 log.go:172] (0xc00010ed10) (0xc000677220) Create stream\nI0511 21:59:23.172775    3635 log.go:172] (0xc00010ed10) (0xc000677220) Stream added, broadcasting: 3\nI0511 21:59:23.173918    3635 log.go:172] (0xc00010ed10) Reply frame received for 3\nI0511 21:59:23.174004    3635 log.go:172] (0xc00010ed10) (0xc000677400) Create stream\nI0511 21:59:23.174062    3635 log.go:172] (0xc00010ed10) (0xc000677400) Stream added, broadcasting: 5\nI0511 21:59:23.176336    3635 log.go:172] (0xc00010ed10) Reply frame received for 5\nI0511 21:59:23.222526    3635 log.go:172] (0xc00010ed10) Data frame received for 3\nI0511 21:59:23.222552    3635 log.go:172] (0xc000677220) (3) Data frame handling\nI0511 21:59:23.222568    3635 log.go:172] (0xc00010ed10) Data frame received for 5\nI0511 21:59:23.222575    3635 log.go:172] (0xc000677400) (5) Data frame handling\nI0511 21:59:23.222583    3635 log.go:172] (0xc000677400) (5) Data frame sent\nI0511 21:59:23.222590    3635 log.go:172] (0xc00010ed10) Data frame received for 5\nI0511 21:59:23.222596    3635 log.go:172] (0xc000677400) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.132.166 80\nConnection to 10.105.132.166 80 port [tcp/http] succeeded!\nI0511 21:59:23.223598    3635 log.go:172] (0xc00010ed10) Data frame received for 1\nI0511 21:59:23.223618    3635 log.go:172] (0xc0000c6320) (1) Data frame handling\nI0511 21:59:23.223629    3635 log.go:172] (0xc0000c6320) (1) Data frame sent\nI0511 21:59:23.223641    3635 log.go:172] (0xc00010ed10) (0xc0000c6320) Stream removed, broadcasting: 1\nI0511 21:59:23.223882    3635 log.go:172] (0xc00010ed10) (0xc0000c6320) Stream removed, broadcasting: 1\nI0511 21:59:23.223903    3635 log.go:172] (0xc00010ed10) (0xc000677220) Stream removed, broadcasting: 3\nI0511 21:59:23.224021    3635 log.go:172] (0xc00010ed10) (0xc000677400) Stream removed, broadcasting: 5\n"
May 11 21:59:23.227: INFO: stdout: ""
May 11 21:59:23.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9992 execpodqbmvk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 32301'
May 11 21:59:23.442: INFO: stderr: "I0511 21:59:23.354082    3655 log.go:172] (0xc0003c7ef0) (0xc00063a1e0) Create stream\nI0511 21:59:23.354158    3655 log.go:172] (0xc0003c7ef0) (0xc00063a1e0) Stream added, broadcasting: 1\nI0511 21:59:23.357732    3655 log.go:172] (0xc0003c7ef0) Reply frame received for 1\nI0511 21:59:23.357788    3655 log.go:172] (0xc0003c7ef0) (0xc00062f360) Create stream\nI0511 21:59:23.357817    3655 log.go:172] (0xc0003c7ef0) (0xc00062f360) Stream added, broadcasting: 3\nI0511 21:59:23.359069    3655 log.go:172] (0xc0003c7ef0) Reply frame received for 3\nI0511 21:59:23.359136    3655 log.go:172] (0xc0003c7ef0) (0xc00063a280) Create stream\nI0511 21:59:23.359157    3655 log.go:172] (0xc0003c7ef0) (0xc00063a280) Stream added, broadcasting: 5\nI0511 21:59:23.360404    3655 log.go:172] (0xc0003c7ef0) Reply frame received for 5\nI0511 21:59:23.435390    3655 log.go:172] (0xc0003c7ef0) Data frame received for 3\nI0511 21:59:23.435414    3655 log.go:172] (0xc00062f360) (3) Data frame handling\nI0511 21:59:23.435459    3655 log.go:172] (0xc0003c7ef0) Data frame received for 5\nI0511 21:59:23.435488    3655 log.go:172] (0xc00063a280) (5) Data frame handling\nI0511 21:59:23.435526    3655 log.go:172] (0xc00063a280) (5) Data frame sent\nI0511 21:59:23.435551    3655 log.go:172] (0xc0003c7ef0) Data frame received for 5\nI0511 21:59:23.435571    3655 log.go:172] (0xc00063a280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 32301\nConnection to 172.17.0.15 32301 port [tcp/32301] succeeded!\nI0511 21:59:23.437373    3655 log.go:172] (0xc0003c7ef0) Data frame received for 1\nI0511 21:59:23.437399    3655 log.go:172] (0xc00063a1e0) (1) Data frame handling\nI0511 21:59:23.437426    3655 log.go:172] (0xc00063a1e0) (1) Data frame sent\nI0511 21:59:23.437472    3655 log.go:172] (0xc0003c7ef0) (0xc00063a1e0) Stream removed, broadcasting: 1\nI0511 21:59:23.437504    3655 log.go:172] (0xc0003c7ef0) Go away received\nI0511 21:59:23.437975    3655 log.go:172] (0xc0003c7ef0) (0xc00063a1e0) Stream removed, broadcasting: 1\nI0511 21:59:23.438000    3655 log.go:172] (0xc0003c7ef0) (0xc00062f360) Stream removed, broadcasting: 3\nI0511 21:59:23.438012    3655 log.go:172] (0xc0003c7ef0) (0xc00063a280) Stream removed, broadcasting: 5\n"
May 11 21:59:23.442: INFO: stdout: ""
May 11 21:59:23.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9992 execpodqbmvk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 32301'
May 11 21:59:23.664: INFO: stderr: "I0511 21:59:23.589407    3678 log.go:172] (0xc0009d2000) (0xc00099c000) Create stream\nI0511 21:59:23.589470    3678 log.go:172] (0xc0009d2000) (0xc00099c000) Stream added, broadcasting: 1\nI0511 21:59:23.592483    3678 log.go:172] (0xc0009d2000) Reply frame received for 1\nI0511 21:59:23.592533    3678 log.go:172] (0xc0009d2000) (0xc0005e0000) Create stream\nI0511 21:59:23.592546    3678 log.go:172] (0xc0009d2000) (0xc0005e0000) Stream added, broadcasting: 3\nI0511 21:59:23.593855    3678 log.go:172] (0xc0009d2000) Reply frame received for 3\nI0511 21:59:23.593913    3678 log.go:172] (0xc0009d2000) (0xc0007cb540) Create stream\nI0511 21:59:23.593927    3678 log.go:172] (0xc0009d2000) (0xc0007cb540) Stream added, broadcasting: 5\nI0511 21:59:23.595146    3678 log.go:172] (0xc0009d2000) Reply frame received for 5\nI0511 21:59:23.658713    3678 log.go:172] (0xc0009d2000) Data frame received for 3\nI0511 21:59:23.658781    3678 log.go:172] (0xc0005e0000) (3) Data frame handling\nI0511 21:59:23.658821    3678 log.go:172] (0xc0009d2000) Data frame received for 5\nI0511 21:59:23.658841    3678 log.go:172] (0xc0007cb540) (5) Data frame handling\nI0511 21:59:23.658868    3678 log.go:172] (0xc0007cb540) (5) Data frame sent\nI0511 21:59:23.658883    3678 log.go:172] (0xc0009d2000) Data frame received for 5\nI0511 21:59:23.658895    3678 log.go:172] (0xc0007cb540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 32301\nConnection to 172.17.0.18 32301 port [tcp/32301] succeeded!\nI0511 21:59:23.660386    3678 log.go:172] (0xc0009d2000) Data frame received for 1\nI0511 21:59:23.660404    3678 log.go:172] (0xc00099c000) (1) Data frame handling\nI0511 21:59:23.660414    3678 log.go:172] (0xc00099c000) (1) Data frame sent\nI0511 21:59:23.660425    3678 log.go:172] (0xc0009d2000) (0xc00099c000) Stream removed, broadcasting: 1\nI0511 21:59:23.660652    3678 log.go:172] (0xc0009d2000) Go away received\nI0511 21:59:23.660704    3678 log.go:172] (0xc0009d2000) (0xc00099c000) Stream removed, broadcasting: 1\nI0511 21:59:23.660720    3678 log.go:172] (0xc0009d2000) (0xc0005e0000) Stream removed, broadcasting: 3\nI0511 21:59:23.660728    3678 log.go:172] (0xc0009d2000) (0xc0007cb540) Stream removed, broadcasting: 5\n"
May 11 21:59:23.664: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:59:23.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9992" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:19.749 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":226,"skipped":3814,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:59:23.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:59:29.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1924" for this suite.

• [SLOW TEST:5.582 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":227,"skipped":3875,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:59:29.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-vtqb
STEP: Creating a pod to test atomic-volume-subpath
May 11 21:59:29.773: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vtqb" in namespace "subpath-4580" to be "Succeeded or Failed"
May 11 21:59:29.888: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Pending", Reason="", readiness=false. Elapsed: 115.705847ms
May 11 21:59:31.920: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147812784s
May 11 21:59:33.923: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15063192s
May 11 21:59:35.926: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 6.153483437s
May 11 21:59:37.978: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 8.205252921s
May 11 21:59:39.982: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 10.20986124s
May 11 21:59:41.986: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 12.213852985s
May 11 21:59:44.178: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 14.405690988s
May 11 21:59:46.183: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 16.410204831s
May 11 21:59:48.188: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 18.415214978s
May 11 21:59:50.191: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 20.418120652s
May 11 21:59:52.267: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 22.494556342s
May 11 21:59:54.272: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Running", Reason="", readiness=true. Elapsed: 24.499333763s
May 11 21:59:56.276: INFO: Pod "pod-subpath-test-projected-vtqb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.502956137s
STEP: Saw pod success
May 11 21:59:56.276: INFO: Pod "pod-subpath-test-projected-vtqb" satisfied condition "Succeeded or Failed"
May 11 21:59:56.278: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-vtqb container test-container-subpath-projected-vtqb: 
STEP: delete the pod
May 11 21:59:56.351: INFO: Waiting for pod pod-subpath-test-projected-vtqb to disappear
May 11 21:59:56.357: INFO: Pod pod-subpath-test-projected-vtqb no longer exists
STEP: Deleting pod pod-subpath-test-projected-vtqb
May 11 21:59:56.357: INFO: Deleting pod "pod-subpath-test-projected-vtqb" in namespace "subpath-4580"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 21:59:56.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4580" for this suite.

• [SLOW TEST:27.109 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":228,"skipped":3877,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 21:59:56.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 21:59:56.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39201a40-2663-431d-90c8-5b5b2bd36259" in namespace "downward-api-2446" to be "Succeeded or Failed"
May 11 21:59:56.492: INFO: Pod "downwardapi-volume-39201a40-2663-431d-90c8-5b5b2bd36259": Phase="Pending", Reason="", readiness=false. Elapsed: 66.285825ms
May 11 21:59:58.495: INFO: Pod "downwardapi-volume-39201a40-2663-431d-90c8-5b5b2bd36259": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069421646s
May 11 22:00:00.500: INFO: Pod "downwardapi-volume-39201a40-2663-431d-90c8-5b5b2bd36259": Phase="Running", Reason="", readiness=true. Elapsed: 4.074046296s
May 11 22:00:02.535: INFO: Pod "downwardapi-volume-39201a40-2663-431d-90c8-5b5b2bd36259": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108727665s
STEP: Saw pod success
May 11 22:00:02.535: INFO: Pod "downwardapi-volume-39201a40-2663-431d-90c8-5b5b2bd36259" satisfied condition "Succeeded or Failed"
May 11 22:00:02.538: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-39201a40-2663-431d-90c8-5b5b2bd36259 container client-container: 
STEP: delete the pod
May 11 22:00:02.687: INFO: Waiting for pod downwardapi-volume-39201a40-2663-431d-90c8-5b5b2bd36259 to disappear
May 11 22:00:02.741: INFO: Pod downwardapi-volume-39201a40-2663-431d-90c8-5b5b2bd36259 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:00:02.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2446" for this suite.

• [SLOW TEST:6.385 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3895,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:00:02.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 11 22:00:03.156: INFO: Waiting up to 5m0s for pod "downward-api-33a504b2-ed9a-415d-99fb-f14ee4f04f84" in namespace "downward-api-4596" to be "Succeeded or Failed"
May 11 22:00:03.259: INFO: Pod "downward-api-33a504b2-ed9a-415d-99fb-f14ee4f04f84": Phase="Pending", Reason="", readiness=false. Elapsed: 102.853906ms
May 11 22:00:05.518: INFO: Pod "downward-api-33a504b2-ed9a-415d-99fb-f14ee4f04f84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361966362s
May 11 22:00:07.523: INFO: Pod "downward-api-33a504b2-ed9a-415d-99fb-f14ee4f04f84": Phase="Running", Reason="", readiness=true. Elapsed: 4.366647506s
May 11 22:00:09.532: INFO: Pod "downward-api-33a504b2-ed9a-415d-99fb-f14ee4f04f84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.375600212s
STEP: Saw pod success
May 11 22:00:09.532: INFO: Pod "downward-api-33a504b2-ed9a-415d-99fb-f14ee4f04f84" satisfied condition "Succeeded or Failed"
May 11 22:00:09.843: INFO: Trying to get logs from node kali-worker pod downward-api-33a504b2-ed9a-415d-99fb-f14ee4f04f84 container dapi-container: 
STEP: delete the pod
May 11 22:00:10.306: INFO: Waiting for pod downward-api-33a504b2-ed9a-415d-99fb-f14ee4f04f84 to disappear
May 11 22:00:10.493: INFO: Pod downward-api-33a504b2-ed9a-415d-99fb-f14ee4f04f84 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:00:10.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4596" for this suite.

• [SLOW TEST:7.996 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3952,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:00:10.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-974.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-974.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-974.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-974.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-974.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-974.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-974.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

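Editor's note: the `dig` probe scripts above depend on a headless service whose name forms the pod subdomain: a pod that sets `spec.subdomain` to the service name gets a record of the form `<hostname>.<service>.<namespace>.svc.cluster.local`, which is exactly what `dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local` resolves. A minimal sketch of that wiring (labels and image are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None                 # headless: DNS resolves directly to pod IPs
  selector:
    name: dns-querier-2
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # yields dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local
  containers:
  - name: querier
    image: busybox
    command: ["sleep", "3600"]
```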
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 22:00:21.583: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:21.585: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:21.587: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:21.589: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:21.595: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:21.598: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:21.600: INFO: Unable to read jessie_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:21.602: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:21.606: INFO: Lookups using dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local]

May 11 22:00:26.611: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:26.615: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:26.619: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:26.622: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:26.631: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:26.634: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:26.636: INFO: Unable to read jessie_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:26.638: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:26.643: INFO: Lookups using dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local]

May 11 22:00:31.806: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:31.809: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:31.846: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:32.032: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:32.106: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:32.109: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:32.112: INFO: Unable to read jessie_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:32.115: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:32.123: INFO: Lookups using dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local]

May 11 22:00:36.632: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:36.819: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:36.991: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:36.994: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:37.002: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:37.005: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:37.007: INFO: Unable to read jessie_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:37.010: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:37.014: INFO: Lookups using dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local]

May 11 22:00:41.613: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:41.616: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:41.619: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:41.621: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:41.633: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:41.636: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:41.638: INFO: Unable to read jessie_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:41.640: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:41.644: INFO: Lookups using dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local]

May 11 22:00:46.611: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:46.615: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:46.618: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:46.621: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:46.631: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:46.634: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:46.637: INFO: Unable to read jessie_udp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:46.640: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local from pod dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc: the server could not find the requested resource (get pods dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc)
May 11 22:00:46.646: INFO: Lookups using dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local wheezy_udp@dns-test-service-2.dns-974.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-974.svc.cluster.local jessie_udp@dns-test-service-2.dns-974.svc.cluster.local jessie_tcp@dns-test-service-2.dns-974.svc.cluster.local]

May 11 22:00:52.261: INFO: DNS probes using dns-974/dns-test-f71c176c-43bb-4441-bfa0-d345bfb886fc succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:00:52.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-974" for this suite.

• [SLOW TEST:42.330 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":231,"skipped":4002,"failed":0}
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:00:53.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
May 11 22:00:54.207: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3412" to be "Succeeded or Failed"
May 11 22:00:55.006: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 799.201014ms
May 11 22:00:57.011: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.803876037s
May 11 22:00:59.499: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.292249287s
May 11 22:01:01.657: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.450475569s
May 11 22:01:03.888: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.681092449s
May 11 22:01:05.894: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.686774667s
May 11 22:01:08.272: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.065027318s
STEP: Saw pod success
May 11 22:01:08.272: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 11 22:01:08.274: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
May 11 22:01:08.480: INFO: Waiting for pod pod-host-path-test to disappear
May 11 22:01:08.522: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:01:08.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3412" for this suite.

• [SLOW TEST:15.452 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":4007,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:01:08.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 11 22:01:08.876: INFO: Waiting up to 5m0s for pod "pod-681199c1-6b04-4f98-ba0a-a021a5947771" in namespace "emptydir-5933" to be "Succeeded or Failed"
May 11 22:01:08.886: INFO: Pod "pod-681199c1-6b04-4f98-ba0a-a021a5947771": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06319ms
May 11 22:01:10.890: INFO: Pod "pod-681199c1-6b04-4f98-ba0a-a021a5947771": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01348997s
May 11 22:01:12.893: INFO: Pod "pod-681199c1-6b04-4f98-ba0a-a021a5947771": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017157737s
May 11 22:01:15.080: INFO: Pod "pod-681199c1-6b04-4f98-ba0a-a021a5947771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203540888s
STEP: Saw pod success
May 11 22:01:15.080: INFO: Pod "pod-681199c1-6b04-4f98-ba0a-a021a5947771" satisfied condition "Succeeded or Failed"
May 11 22:01:15.130: INFO: Trying to get logs from node kali-worker pod pod-681199c1-6b04-4f98-ba0a-a021a5947771 container test-container: 
STEP: delete the pod
May 11 22:01:15.249: INFO: Waiting for pod pod-681199c1-6b04-4f98-ba0a-a021a5947771 to disappear
May 11 22:01:15.270: INFO: Pod pod-681199c1-6b04-4f98-ba0a-a021a5947771 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:01:15.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5933" for this suite.

• [SLOW TEST:6.745 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4058,"failed":0}
SSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:01:15.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 11 22:01:26.357: INFO: Successfully updated pod "adopt-release-b5tzs"
STEP: Checking that the Job readopts the Pod
May 11 22:01:26.357: INFO: Waiting up to 15m0s for pod "adopt-release-b5tzs" in namespace "job-3122" to be "adopted"
May 11 22:01:27.069: INFO: Pod "adopt-release-b5tzs": Phase="Running", Reason="", readiness=true. Elapsed: 711.991297ms
May 11 22:01:29.072: INFO: Pod "adopt-release-b5tzs": Phase="Running", Reason="", readiness=true. Elapsed: 2.71507223s
May 11 22:01:29.072: INFO: Pod "adopt-release-b5tzs" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
May 11 22:01:30.193: INFO: Successfully updated pod "adopt-release-b5tzs"
STEP: Checking that the Job releases the Pod
May 11 22:01:30.193: INFO: Waiting up to 15m0s for pod "adopt-release-b5tzs" in namespace "job-3122" to be "released"
May 11 22:01:30.876: INFO: Pod "adopt-release-b5tzs": Phase="Running", Reason="", readiness=true. Elapsed: 682.780809ms
May 11 22:01:30.876: INFO: Pod "adopt-release-b5tzs" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:01:30.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3122" for this suite.

• [SLOW TEST:16.361 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":234,"skipped":4064,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:01:31.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 22:01:32.117: INFO: Create a RollingUpdate DaemonSet
May 11 22:01:32.120: INFO: Check that daemon pods launch on every node of the cluster
May 11 22:01:32.142: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:32.151: INFO: Number of nodes with available pods: 0
May 11 22:01:32.151: INFO: Node kali-worker is running more than one daemon pod
May 11 22:01:33.156: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:33.159: INFO: Number of nodes with available pods: 0
May 11 22:01:33.160: INFO: Node kali-worker is running more than one daemon pod
May 11 22:01:34.448: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:34.518: INFO: Number of nodes with available pods: 0
May 11 22:01:34.518: INFO: Node kali-worker is running more than one daemon pod
May 11 22:01:35.351: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:35.355: INFO: Number of nodes with available pods: 0
May 11 22:01:35.355: INFO: Node kali-worker is running more than one daemon pod
May 11 22:01:36.274: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:36.277: INFO: Number of nodes with available pods: 0
May 11 22:01:36.277: INFO: Node kali-worker is running more than one daemon pod
May 11 22:01:37.215: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:37.219: INFO: Number of nodes with available pods: 0
May 11 22:01:37.219: INFO: Node kali-worker is running more than one daemon pod
May 11 22:01:38.176: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:38.224: INFO: Number of nodes with available pods: 0
May 11 22:01:38.224: INFO: Node kali-worker is running more than one daemon pod
May 11 22:01:39.298: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:39.338: INFO: Number of nodes with available pods: 2
May 11 22:01:39.338: INFO: Number of running nodes: 2, number of available pods: 2
May 11 22:01:39.338: INFO: Update the DaemonSet to trigger a rollout
May 11 22:01:39.344: INFO: Updating DaemonSet daemon-set
May 11 22:01:54.374: INFO: Roll back the DaemonSet before rollout is complete
May 11 22:01:54.379: INFO: Updating DaemonSet daemon-set
May 11 22:01:54.379: INFO: Make sure DaemonSet rollback is complete
May 11 22:01:54.452: INFO: Wrong image for pod: daemon-set-jzd79. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 11 22:01:54.452: INFO: Pod daemon-set-jzd79 is not available
May 11 22:01:54.480: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:55.536: INFO: Wrong image for pod: daemon-set-jzd79. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 11 22:01:55.536: INFO: Pod daemon-set-jzd79 is not available
May 11 22:01:55.539: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:56.484: INFO: Wrong image for pod: daemon-set-jzd79. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 11 22:01:56.484: INFO: Pod daemon-set-jzd79 is not available
May 11 22:01:56.487: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:57.716: INFO: Wrong image for pod: daemon-set-jzd79. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 11 22:01:57.716: INFO: Pod daemon-set-jzd79 is not available
May 11 22:01:57.980: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:01:58.527: INFO: Pod daemon-set-rx5lj is not available
May 11 22:01:58.530: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1531, will wait for the garbage collector to delete the pods
May 11 22:01:58.638: INFO: Deleting DaemonSet.extensions daemon-set took: 53.195918ms
May 11 22:01:59.139: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.225207ms
May 11 22:02:14.141: INFO: Number of nodes with available pods: 0
May 11 22:02:14.141: INFO: Number of running nodes: 0, number of available pods: 0
May 11 22:02:15.010: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1531/daemonsets","resourceVersion":"3532307"},"items":null}

May 11 22:02:15.012: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1531/pods","resourceVersion":"3532307"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:02:15.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1531" for this suite.

• [SLOW TEST:43.957 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":235,"skipped":4119,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:02:15.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 11 22:02:17.454: INFO: Waiting up to 5m0s for pod "pod-8ed9ef34-4457-45ef-82bf-0e71d313f947" in namespace "emptydir-7342" to be "Succeeded or Failed"
May 11 22:02:17.979: INFO: Pod "pod-8ed9ef34-4457-45ef-82bf-0e71d313f947": Phase="Pending", Reason="", readiness=false. Elapsed: 525.518325ms
May 11 22:02:20.112: INFO: Pod "pod-8ed9ef34-4457-45ef-82bf-0e71d313f947": Phase="Pending", Reason="", readiness=false. Elapsed: 2.658731579s
May 11 22:02:22.800: INFO: Pod "pod-8ed9ef34-4457-45ef-82bf-0e71d313f947": Phase="Pending", Reason="", readiness=false. Elapsed: 5.346476306s
May 11 22:02:25.428: INFO: Pod "pod-8ed9ef34-4457-45ef-82bf-0e71d313f947": Phase="Pending", Reason="", readiness=false. Elapsed: 7.974628149s
May 11 22:02:27.998: INFO: Pod "pod-8ed9ef34-4457-45ef-82bf-0e71d313f947": Phase="Pending", Reason="", readiness=false. Elapsed: 10.544260905s
May 11 22:02:30.764: INFO: Pod "pod-8ed9ef34-4457-45ef-82bf-0e71d313f947": Phase="Running", Reason="", readiness=true. Elapsed: 13.310232308s
May 11 22:02:33.064: INFO: Pod "pod-8ed9ef34-4457-45ef-82bf-0e71d313f947": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.610078064s
STEP: Saw pod success
May 11 22:02:33.064: INFO: Pod "pod-8ed9ef34-4457-45ef-82bf-0e71d313f947" satisfied condition "Succeeded or Failed"
May 11 22:02:33.067: INFO: Trying to get logs from node kali-worker pod pod-8ed9ef34-4457-45ef-82bf-0e71d313f947 container test-container: 
STEP: delete the pod
May 11 22:02:35.263: INFO: Waiting for pod pod-8ed9ef34-4457-45ef-82bf-0e71d313f947 to disappear
May 11 22:02:36.061: INFO: Pod pod-8ed9ef34-4457-45ef-82bf-0e71d313f947 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:02:36.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7342" for this suite.

• [SLOW TEST:21.258 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4122,"failed":0}
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:02:36.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 11 22:02:49.554: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:02:50.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9067" for this suite.

• [SLOW TEST:13.898 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":237,"skipped":4122,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:02:50.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 11 22:02:51.942: INFO: Waiting up to 5m0s for pod "pod-ca092995-d5b7-4454-a15f-ef7761dded35" in namespace "emptydir-2398" to be "Succeeded or Failed"
May 11 22:02:51.975: INFO: Pod "pod-ca092995-d5b7-4454-a15f-ef7761dded35": Phase="Pending", Reason="", readiness=false. Elapsed: 33.065245ms
May 11 22:02:54.039: INFO: Pod "pod-ca092995-d5b7-4454-a15f-ef7761dded35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097123128s
May 11 22:02:56.042: INFO: Pod "pod-ca092995-d5b7-4454-a15f-ef7761dded35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100559143s
May 11 22:02:58.395: INFO: Pod "pod-ca092995-d5b7-4454-a15f-ef7761dded35": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452966335s
May 11 22:03:00.398: INFO: Pod "pod-ca092995-d5b7-4454-a15f-ef7761dded35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.456313956s
STEP: Saw pod success
May 11 22:03:00.398: INFO: Pod "pod-ca092995-d5b7-4454-a15f-ef7761dded35" satisfied condition "Succeeded or Failed"
May 11 22:03:00.400: INFO: Trying to get logs from node kali-worker2 pod pod-ca092995-d5b7-4454-a15f-ef7761dded35 container test-container: 
STEP: delete the pod
May 11 22:03:00.641: INFO: Waiting for pod pod-ca092995-d5b7-4454-a15f-ef7761dded35 to disappear
May 11 22:03:00.686: INFO: Pod pod-ca092995-d5b7-4454-a15f-ef7761dded35 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:03:00.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2398" for this suite.

• [SLOW TEST:9.941 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4141,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:03:00.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 22:03:01.239: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-10e365fb-7f53-42f6-b2ba-bd393ba335b0" in namespace "security-context-test-4122" to be "Succeeded or Failed"
May 11 22:03:01.278: INFO: Pod "alpine-nnp-false-10e365fb-7f53-42f6-b2ba-bd393ba335b0": Phase="Pending", Reason="", readiness=false. Elapsed: 39.469681ms
May 11 22:03:03.282: INFO: Pod "alpine-nnp-false-10e365fb-7f53-42f6-b2ba-bd393ba335b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043578208s
May 11 22:03:05.286: INFO: Pod "alpine-nnp-false-10e365fb-7f53-42f6-b2ba-bd393ba335b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047390583s
May 11 22:03:07.291: INFO: Pod "alpine-nnp-false-10e365fb-7f53-42f6-b2ba-bd393ba335b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052203574s
May 11 22:03:07.291: INFO: Pod "alpine-nnp-false-10e365fb-7f53-42f6-b2ba-bd393ba335b0" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:03:07.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4122" for this suite.

• [SLOW TEST:6.613 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4149,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:03:07.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 22:03:07.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 11 22:03:09.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1285 create -f -'
May 11 22:03:22.495: INFO: stderr: ""
May 11 22:03:22.495: INFO: stdout: "e2e-test-crd-publish-openapi-2270-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 11 22:03:22.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1285 delete e2e-test-crd-publish-openapi-2270-crds test-cr'
May 11 22:03:22.698: INFO: stderr: ""
May 11 22:03:22.698: INFO: stdout: "e2e-test-crd-publish-openapi-2270-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 11 22:03:22.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1285 apply -f -'
May 11 22:03:23.107: INFO: stderr: ""
May 11 22:03:23.107: INFO: stdout: "e2e-test-crd-publish-openapi-2270-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 11 22:03:23.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1285 delete e2e-test-crd-publish-openapi-2270-crds test-cr'
May 11 22:03:23.237: INFO: stderr: ""
May 11 22:03:23.237: INFO: stdout: "e2e-test-crd-publish-openapi-2270-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 11 22:03:23.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2270-crds'
May 11 22:03:23.525: INFO: stderr: ""
May 11 22:03:23.525: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2270-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:03:26.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1285" for this suite.

• [SLOW TEST:19.167 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":240,"skipped":4163,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:03:26.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 11 22:03:43.958: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 22:03:44.382: INFO: Pod pod-with-prestop-http-hook still exists
May 11 22:03:46.382: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 22:03:46.386: INFO: Pod pod-with-prestop-http-hook still exists
May 11 22:03:48.382: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 22:03:48.411: INFO: Pod pod-with-prestop-http-hook still exists
May 11 22:03:50.382: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 22:03:50.386: INFO: Pod pod-with-prestop-http-hook still exists
May 11 22:03:52.382: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 22:03:52.579: INFO: Pod pod-with-prestop-http-hook still exists
May 11 22:03:54.382: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 22:03:55.011: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:03:55.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7977" for this suite.

• [SLOW TEST:28.840 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4164,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:03:55.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
May 11 22:04:06.042: INFO: Pod pod-hostip-5cc4f9c1-585f-47db-9f45-a34c0203ff4b has hostIP: 172.17.0.18
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:04:06.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1531" for this suite.

• [SLOW TEST:10.760 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4222,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:04:06.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 22:04:06.156: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20ccb98a-4a2b-4e15-ae6a-6158cca43d18" in namespace "projected-1049" to be "Succeeded or Failed"
May 11 22:04:06.207: INFO: Pod "downwardapi-volume-20ccb98a-4a2b-4e15-ae6a-6158cca43d18": Phase="Pending", Reason="", readiness=false. Elapsed: 51.591254ms
May 11 22:04:08.212: INFO: Pod "downwardapi-volume-20ccb98a-4a2b-4e15-ae6a-6158cca43d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056032251s
May 11 22:04:10.232: INFO: Pod "downwardapi-volume-20ccb98a-4a2b-4e15-ae6a-6158cca43d18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076236017s
May 11 22:04:12.336: INFO: Pod "downwardapi-volume-20ccb98a-4a2b-4e15-ae6a-6158cca43d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.180237561s
STEP: Saw pod success
May 11 22:04:12.336: INFO: Pod "downwardapi-volume-20ccb98a-4a2b-4e15-ae6a-6158cca43d18" satisfied condition "Succeeded or Failed"
May 11 22:04:12.341: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-20ccb98a-4a2b-4e15-ae6a-6158cca43d18 container client-container: 
STEP: delete the pod
May 11 22:04:13.173: INFO: Waiting for pod downwardapi-volume-20ccb98a-4a2b-4e15-ae6a-6158cca43d18 to disappear
May 11 22:04:13.387: INFO: Pod downwardapi-volume-20ccb98a-4a2b-4e15-ae6a-6158cca43d18 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:04:13.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1049" for this suite.

• [SLOW TEST:8.056 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4236,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:04:14.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0511 22:04:29.905499       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 22:04:29.905: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:04:29.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1288" for this suite.

• [SLOW TEST:16.546 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":244,"skipped":4253,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:04:30.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 22:04:32.071: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 11 22:04:32.227: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:32.229: INFO: Number of nodes with available pods: 0
May 11 22:04:32.229: INFO: Node kali-worker is running more than one daemon pod
May 11 22:04:33.234: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:33.236: INFO: Number of nodes with available pods: 0
May 11 22:04:33.236: INFO: Node kali-worker is running more than one daemon pod
May 11 22:04:34.234: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:34.238: INFO: Number of nodes with available pods: 0
May 11 22:04:34.238: INFO: Node kali-worker is running more than one daemon pod
May 11 22:04:35.235: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:35.239: INFO: Number of nodes with available pods: 0
May 11 22:04:35.239: INFO: Node kali-worker is running more than one daemon pod
May 11 22:04:36.437: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:36.451: INFO: Number of nodes with available pods: 0
May 11 22:04:36.451: INFO: Node kali-worker is running more than one daemon pod
May 11 22:04:37.261: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:37.326: INFO: Number of nodes with available pods: 0
May 11 22:04:37.326: INFO: Node kali-worker is running more than one daemon pod
May 11 22:04:38.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:38.939: INFO: Number of nodes with available pods: 1
May 11 22:04:38.939: INFO: Node kali-worker is running more than one daemon pod
May 11 22:04:39.357: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:39.567: INFO: Number of nodes with available pods: 2
May 11 22:04:39.567: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 11 22:04:39.819: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:39.819: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:39.824: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:40.915: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:40.915: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:41.065: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:41.846: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:41.846: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:41.846: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:41.964: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:42.827: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:42.827: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:42.827: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:42.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:43.828: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:43.828: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:43.828: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:43.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:44.827: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:44.827: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:44.827: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:44.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:45.834: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:45.834: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:45.834: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:45.842: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:46.829: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:46.829: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:46.829: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:46.834: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:47.830: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:47.830: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:47.830: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:47.834: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:48.830: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:48.830: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:48.830: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:48.835: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:49.830: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:49.830: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:49.830: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:49.834: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:50.829: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:50.829: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:50.829: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:50.833: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:51.842: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:51.842: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:51.842: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:51.845: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:52.828: INFO: Wrong image for pod: daemon-set-w5mh5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:52.828: INFO: Pod daemon-set-w5mh5 is not available
May 11 22:04:52.828: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:52.833: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:53.828: INFO: Pod daemon-set-w6pnk is not available
May 11 22:04:53.829: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:53.832: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:54.828: INFO: Pod daemon-set-w6pnk is not available
May 11 22:04:54.828: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:54.832: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:55.829: INFO: Pod daemon-set-w6pnk is not available
May 11 22:04:55.829: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:55.833: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:57.065: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:57.281: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:57.866: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:57.932: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:04:58.828: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:04:58.828: INFO: Pod daemon-set-z88th is not available
May 11 22:04:58.832: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:00.617: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:05:00.617: INFO: Pod daemon-set-z88th is not available
May 11 22:05:00.620: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:01.035: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:05:01.035: INFO: Pod daemon-set-z88th is not available
May 11 22:05:01.038: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:01.829: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:05:01.829: INFO: Pod daemon-set-z88th is not available
May 11 22:05:01.833: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:02.829: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:05:02.829: INFO: Pod daemon-set-z88th is not available
May 11 22:05:02.832: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:04.048: INFO: Wrong image for pod: daemon-set-z88th. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:05:04.048: INFO: Pod daemon-set-z88th is not available
May 11 22:05:04.311: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:05.125: INFO: Pod daemon-set-f477r is not available
May 11 22:05:05.129: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 11 22:05:05.178: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:05.523: INFO: Number of nodes with available pods: 1
May 11 22:05:05.523: INFO: Node kali-worker is running more than one daemon pod
May 11 22:05:06.529: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:06.533: INFO: Number of nodes with available pods: 1
May 11 22:05:06.533: INFO: Node kali-worker is running more than one daemon pod
May 11 22:05:07.839: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:07.847: INFO: Number of nodes with available pods: 1
May 11 22:05:07.847: INFO: Node kali-worker is running more than one daemon pod
May 11 22:05:08.743: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:08.746: INFO: Number of nodes with available pods: 1
May 11 22:05:08.746: INFO: Node kali-worker is running more than one daemon pod
May 11 22:05:09.528: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:09.608: INFO: Number of nodes with available pods: 1
May 11 22:05:09.608: INFO: Node kali-worker is running more than one daemon pod
May 11 22:05:10.689: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:10.693: INFO: Number of nodes with available pods: 1
May 11 22:05:10.693: INFO: Node kali-worker is running more than one daemon pod
May 11 22:05:11.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:05:11.774: INFO: Number of nodes with available pods: 2
May 11 22:05:11.774: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7770, will wait for the garbage collector to delete the pods
May 11 22:05:11.843: INFO: Deleting DaemonSet.extensions daemon-set took: 5.746392ms
May 11 22:05:12.243: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.236448ms
May 11 22:05:23.847: INFO: Number of nodes with available pods: 0
May 11 22:05:23.847: INFO: Number of running nodes: 0, number of available pods: 0
May 11 22:05:23.848: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7770/daemonsets","resourceVersion":"3533299"},"items":null}

May 11 22:05:23.850: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7770/pods","resourceVersion":"3533299"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:05:23.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7770" for this suite.

• [SLOW TEST:53.252 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":245,"skipped":4256,"failed":0}
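For reference, the update exercised above requires a DaemonSet whose spec declares the RollingUpdate strategy; the log shows the container image being swapped from httpd:2.4.38-alpine to agnhost:2.12 one pod at a time. A minimal sketch of such a manifest (names, labels, and maxUnavailable are illustrative, not the test's actual manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # illustrative; matches the name seen in the log
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # replace one pod per node at a time
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # No master toleration, so the tainted kali-control-plane node is skipped,
      # consistent with the "can't tolerate node ... skip checking" lines above.
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine  # later updated to agnhost:2.12
```

Patching `spec.template.spec.containers[0].image` on such a DaemonSet triggers the per-pod replacement the log records: each old pod is deleted, its replacement (e.g. daemon-set-w6pnk) briefly reports "not available", and the controller converges when every node runs the new image.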
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:05:23.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 22:05:24.053: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:05:25.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9237" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":246,"skipped":4256,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:05:25.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 11 22:05:33.297: INFO: Pod name wrapped-volume-race-8e0fb231-8f9a-4eec-8193-a3efd326fcf3: Found 0 pods out of 5
May 11 22:05:38.858: INFO: Pod name wrapped-volume-race-8e0fb231-8f9a-4eec-8193-a3efd326fcf3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8e0fb231-8f9a-4eec-8193-a3efd326fcf3 in namespace emptydir-wrapper-894, will wait for the garbage collector to delete the pods
May 11 22:05:57.270: INFO: Deleting ReplicationController wrapped-volume-race-8e0fb231-8f9a-4eec-8193-a3efd326fcf3 took: 417.724955ms
May 11 22:05:57.870: INFO: Terminating ReplicationController wrapped-volume-race-8e0fb231-8f9a-4eec-8193-a3efd326fcf3 pods took: 600.235205ms
STEP: Creating RC which spawns configmap-volume pods
May 11 22:06:15.137: INFO: Pod name wrapped-volume-race-62a6a1d0-f562-47ae-af9b-b19c7c5a5141: Found 0 pods out of 5
May 11 22:06:20.143: INFO: Pod name wrapped-volume-race-62a6a1d0-f562-47ae-af9b-b19c7c5a5141: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-62a6a1d0-f562-47ae-af9b-b19c7c5a5141 in namespace emptydir-wrapper-894, will wait for the garbage collector to delete the pods
May 11 22:06:42.220: INFO: Deleting ReplicationController wrapped-volume-race-62a6a1d0-f562-47ae-af9b-b19c7c5a5141 took: 6.436423ms
May 11 22:06:42.720: INFO: Terminating ReplicationController wrapped-volume-race-62a6a1d0-f562-47ae-af9b-b19c7c5a5141 pods took: 500.258538ms
STEP: Creating RC which spawns configmap-volume pods
May 11 22:07:03.959: INFO: Pod name wrapped-volume-race-0cd1b759-84e9-448d-8750-1deab3f8b9ee: Found 0 pods out of 5
May 11 22:07:08.966: INFO: Pod name wrapped-volume-race-0cd1b759-84e9-448d-8750-1deab3f8b9ee: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0cd1b759-84e9-448d-8750-1deab3f8b9ee in namespace emptydir-wrapper-894, will wait for the garbage collector to delete the pods
May 11 22:07:27.818: INFO: Deleting ReplicationController wrapped-volume-race-0cd1b759-84e9-448d-8750-1deab3f8b9ee took: 168.721683ms
May 11 22:07:28.718: INFO: Terminating ReplicationController wrapped-volume-race-0cd1b759-84e9-448d-8750-1deab3f8b9ee pods took: 900.22184ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:07:57.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-894" for this suite.

• [SLOW TEST:152.218 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":247,"skipped":4265,"failed":0}
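The race scenario above repeatedly spawns pods that each mount many ConfigMap-backed volumes at once (the test creates 50 ConfigMaps, then three rounds of a 5-replica RC). A sketch of the pod-template shape such an RC spawns (names, image, and the two shown mounts are illustrative; the real test wires up one volume per ConfigMap):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-example   # illustrative name
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["sleep", "10000"]
    volumeMounts:
    - name: cfg-0
      mountPath: /etc/cfg-0
    - name: cfg-1
      mountPath: /etc/cfg-1
    # ...one mount per ConfigMap; the test uses 50
  volumes:
  - name: cfg-0
    configMap:
      name: configmap-0   # illustrative ConfigMap name
  - name: cfg-1
    configMap:
      name: configmap-1
```

Mounting many such volumes concurrently across replicas is what historically raced in the kubelet's emptyDir-wrapped volume setup; the test passes when all pods reach Running in every round.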
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:07:57.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8410.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8410.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8410.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8410.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8410.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.206.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.206.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.206.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.206.46_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8410.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8410.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8410.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8410.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8410.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.206.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.206.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.206.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.206.46_tcp@PTR;sleep 1; done
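The probe commands above derive two DNS names from IP addresses: a dashed-IP pod A record built by the `hostname -i | awk` pipeline, and a reversed-octet `in-addr.arpa.` name for the PTR lookups. As a sketch of those same derivations in Python (the namespace and pod IP are illustrative; the service IP 10.105.206.46 appears in the commands above):

```python
def pod_a_record(ip: str, namespace: str) -> str:
    """Dashed-IP pod A record, mirroring the probe's awk pipeline:
    10.244.1.5 in namespace dns-8410 -> 10-244-1-5.dns-8410.pod.cluster.local"""
    return ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"

def ptr_name(ip: str) -> str:
    """Reverse-lookup name for an IPv4 address: octets reversed under in-addr.arpa."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

print(pod_a_record("10.244.1.5", "dns-8410"))  # 10-244-1-5.dns-8410.pod.cluster.local
print(ptr_name("10.105.206.46"))               # 46.206.105.10.in-addr.arpa.
```

The second line reproduces exactly the `46.206.105.10.in-addr.arpa.` PTR target queried in the probe loop for the service IP 10.105.206.46.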

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 22:08:12.335: INFO: Unable to read wheezy_udp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:12.930: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:13.427: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:13.474: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:17.391: INFO: Unable to read jessie_udp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:17.595: INFO: Unable to read jessie_tcp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:17.659: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:17.760: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:18.683: INFO: Lookups using dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7 failed for: [wheezy_udp@dns-test-service.dns-8410.svc.cluster.local wheezy_tcp@dns-test-service.dns-8410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local jessie_udp@dns-test-service.dns-8410.svc.cluster.local jessie_tcp@dns-test-service.dns-8410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local]

May 11 22:08:23.858: INFO: Unable to read wheezy_udp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:24.196: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:24.276: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:24.283: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:24.674: INFO: Unable to read jessie_udp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:24.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:24.721: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:24.790: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:25.439: INFO: Lookups using dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7 failed for: [wheezy_udp@dns-test-service.dns-8410.svc.cluster.local wheezy_tcp@dns-test-service.dns-8410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local jessie_udp@dns-test-service.dns-8410.svc.cluster.local jessie_tcp@dns-test-service.dns-8410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local]

May 11 22:08:28.922: INFO: Unable to read wheezy_udp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:28.925: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:28.959: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:28.961: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:29.249: INFO: Unable to read jessie_udp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:29.250: INFO: Unable to read jessie_tcp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:29.252: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:29.254: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:29.871: INFO: Lookups using dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7 failed for: [wheezy_udp@dns-test-service.dns-8410.svc.cluster.local wheezy_tcp@dns-test-service.dns-8410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local jessie_udp@dns-test-service.dns-8410.svc.cluster.local jessie_tcp@dns-test-service.dns-8410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local]

May 11 22:08:33.687: INFO: Unable to read wheezy_udp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:33.690: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:33.693: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:33.695: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:33.712: INFO: Unable to read jessie_udp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:33.714: INFO: Unable to read jessie_tcp@dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:33.716: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:33.719: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local from pod dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7: the server could not find the requested resource (get pods dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7)
May 11 22:08:33.762: INFO: Lookups using dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7 failed for: [wheezy_udp@dns-test-service.dns-8410.svc.cluster.local wheezy_tcp@dns-test-service.dns-8410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local jessie_udp@dns-test-service.dns-8410.svc.cluster.local jessie_tcp@dns-test-service.dns-8410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8410.svc.cluster.local]

May 11 22:08:38.798: INFO: DNS probes using dns-8410/dns-test-34b8a65c-e78b-4955-9199-a41c834d9aa7 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:08:39.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8410" for this suite.

• [SLOW TEST:42.070 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":248,"skipped":4268,"failed":0}
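For reference, the names probed by the DNS test above follow the standard in-cluster naming scheme: `<service>.<namespace>.svc.<cluster-domain>` for A/AAAA records and `_<port>._<protocol>.<service>.<namespace>.svc.<cluster-domain>` for SRV records. A minimal sketch that reconstructs the names seen in this run (service and namespace taken from the log; `cluster.local` is assumed as the cluster domain, the kind default):

```python
def service_fqdn(service, namespace, domain="cluster.local"):
    """A/AAAA record name for a Service."""
    return f"{service}.{namespace}.svc.{domain}"

def srv_fqdn(port, protocol, service, namespace, domain="cluster.local"):
    """SRV record name for a named port on a Service."""
    return f"_{port}._{protocol}.{service_fqdn(service, namespace, domain)}"

# Names probed by the test above:
print(service_fqdn("dns-test-service", "dns-8410"))
# -> dns-test-service.dns-8410.svc.cluster.local
print(srv_fqdn("http", "tcp", "dns-test-service", "dns-8410"))
# -> _http._tcp.dns-test-service.dns-8410.svc.cluster.local
```

The early "Unable to read" failures above are expected while the pods and kube-dns records converge; the test retries until all lookups succeed.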
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:08:39.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 22:08:39.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea" in namespace "downward-api-6186" to be "Succeeded or Failed"
May 11 22:08:39.754: INFO: Pod "downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea": Phase="Pending", Reason="", readiness=false. Elapsed: 63.735496ms
May 11 22:08:41.899: INFO: Pod "downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208030258s
May 11 22:08:43.902: INFO: Pod "downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211555924s
May 11 22:08:46.263: INFO: Pod "downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea": Phase="Running", Reason="", readiness=true. Elapsed: 6.572462844s
May 11 22:08:48.326: INFO: Pod "downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.63570833s
STEP: Saw pod success
May 11 22:08:48.326: INFO: Pod "downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea" satisfied condition "Succeeded or Failed"
May 11 22:08:48.329: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea container client-container: 
STEP: delete the pod
May 11 22:08:48.491: INFO: Waiting for pod downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea to disappear
May 11 22:08:48.530: INFO: Pod downwardapi-volume-a6c5abed-ef17-4583-8731-87fff3d252ea no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:08:48.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6186" for this suite.

• [SLOW TEST:8.996 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4302,"failed":0}
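The "podname only" case above exercises the downward API volume plugin: the pod's own `metadata.name` is projected into a file that the test container reads back. An illustrative spec of the kind of pod this test creates (pod name, image, and file path are assumptions, not taken from the run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name
```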
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:08:48.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 11 22:08:48.792: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 11 22:08:48.801: INFO: Waiting for terminating namespaces to be deleted...
May 11 22:08:48.803: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May 11 22:08:48.807: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 22:08:48.807: INFO: 	Container kindnet-cni ready: true, restart count 1
May 11 22:08:48.807: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 22:08:48.807: INFO: 	Container kube-proxy ready: true, restart count 0
May 11 22:08:48.807: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May 11 22:08:48.820: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 22:08:48.820: INFO: 	Container kube-proxy ready: true, restart count 0
May 11 22:08:48.820: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 22:08:48.820: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-28671901-d96d-4657-9b2c-07f884b501d7 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-28671901-d96d-4657-9b2c-07f884b501d7 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-28671901-d96d-4657-9b2c-07f884b501d7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:13:57.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1194" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:309.185 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":250,"skipped":4306,"failed":0}
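The scheduling conflict above arises because two pods request the same hostPort/protocol on one node, and a hostPort bound on hostIP 0.0.0.0 (the default when hostIP is omitted) overlaps every specific hostIP. Illustrative specs for the two pods, using the port, hostIPs, and node label recorded in the log (container names and image are assumptions):

```yaml
# pod4: schedules; hostIP omitted, so 54322 is bound on 0.0.0.0
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 54322
      hostPort: 54322
      protocol: TCP
---
# pod5: pinned to the same node via the test's random label; stays Pending
# because 127.0.0.1:54322 conflicts with the existing 0.0.0.0:54322 binding
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-28671901-d96d-4657-9b2c-07f884b501d7: "95"
  containers:
  - name: agnhost
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 54322
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```

The ~5-minute duration of this test is the wait confirming that pod5 remains unschedulable.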
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:13:57.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-031f0571-4457-442a-b639-457836158c23
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-031f0571-4457-442a-b639-457836158c23
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:15:31.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6807" for this suite.

• [SLOW TEST:93.648 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4308,"failed":0}
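ConfigMap updates propagate to configMap volumes asynchronously: the kubelet refreshes projected contents on its sync period plus the ConfigMap cache TTL, which is why the "waiting to observe update in volume" step above can legitimately take a minute or more. An illustrative consumer of the ConfigMap named in this run (the container command and data key are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # illustrative name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-031f0571-4457-442a-b639-457836158c23
```

Note that pods consuming a ConfigMap via environment variables or `subPath` mounts do not receive such updates; only whole-volume projections are refreshed.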
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:15:31.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 11 22:15:31.824: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
May 11 22:15:33.038: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 11 22:15:36.764: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 22:15:39.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 22:15:40.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832133, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 22:15:43.396: INFO: Waited 622.906687ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:15:47.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4352" for this suite.

• [SLOW TEST:16.495 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":252,"skipped":4328,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:15:47.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May 11 22:15:49.175: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-978 /api/v1/namespaces/watch-978/configmaps/e2e-watch-test-label-changed 11bb7297-9d68-46c5-a50e-5be2fc37c01d 3536070 0 2020-05-11 22:15:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-11 22:15:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 22:15:49.175: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-978 /api/v1/namespaces/watch-978/configmaps/e2e-watch-test-label-changed 11bb7297-9d68-46c5-a50e-5be2fc37c01d 3536071 0 2020-05-11 22:15:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-11 22:15:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 22:15:49.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-978 /api/v1/namespaces/watch-978/configmaps/e2e-watch-test-label-changed 11bb7297-9d68-46c5-a50e-5be2fc37c01d 3536072 0 2020-05-11 22:15:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-11 22:15:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May 11 22:15:59.446: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-978 /api/v1/namespaces/watch-978/configmaps/e2e-watch-test-label-changed 11bb7297-9d68-46c5-a50e-5be2fc37c01d 3536122 0 2020-05-11 22:15:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-11 22:15:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 22:15:59.446: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-978 /api/v1/namespaces/watch-978/configmaps/e2e-watch-test-label-changed 11bb7297-9d68-46c5-a50e-5be2fc37c01d 3536123 0 2020-05-11 22:15:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-11 22:15:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 22:15:59.446: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-978 /api/v1/namespaces/watch-978/configmaps/e2e-watch-test-label-changed 11bb7297-9d68-46c5-a50e-5be2fc37c01d 3536124 0 2020-05-11 22:15:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-11 22:15:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:15:59.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-978" for this suite.

• [SLOW TEST:11.588 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":253,"skipped":4377,"failed":0}
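The watch semantics exercised above are worth spelling out: for a label-selector watch, an update that moves an object out of the selector is delivered as DELETED, and an update that moves it back in is delivered as ADDED, even though the underlying object was only modified. A self-contained sketch of that mapping (plain Python, no cluster required; function name is illustrative):

```python
def watch_event(selector, old_labels, new_labels):
    """Map an object's label transition to the event a label-selector watch delivers."""
    def matches(labels):
        return labels is not None and all(labels.get(k) == v for k, v in selector.items())
    was, now = matches(old_labels), matches(new_labels)
    if was and not now:
        return "DELETED"   # left the selector -> watcher sees a delete
    if not was and now:
        return "ADDED"     # (re)entered the selector -> watcher sees an add
    if was and now:
        return "MODIFIED"  # still matching -> ordinary modification
    return None            # never visible to this watcher

sel = {"watch-this-configmap": "label-changed-and-restored"}
matching = {"watch-this-configmap": "label-changed-and-restored"}
relabeled = {"watch-this-configmap": "something-else"}
print(watch_event(sel, matching, relabeled))   # -> DELETED (as in the log above)
```

This is exactly the ADDED/MODIFIED/DELETED sequence recorded at 22:15:49 and 22:15:59 in the log.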
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:15:59.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 11 22:15:59.580: INFO: Waiting up to 5m0s for pod "pod-f45d36b2-ca01-4c0a-b4ad-cc1e8811d906" in namespace "emptydir-9953" to be "Succeeded or Failed"
May 11 22:15:59.605: INFO: Pod "pod-f45d36b2-ca01-4c0a-b4ad-cc1e8811d906": Phase="Pending", Reason="", readiness=false. Elapsed: 24.944106ms
May 11 22:16:01.938: INFO: Pod "pod-f45d36b2-ca01-4c0a-b4ad-cc1e8811d906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357294211s
May 11 22:16:03.941: INFO: Pod "pod-f45d36b2-ca01-4c0a-b4ad-cc1e8811d906": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360408082s
May 11 22:16:06.234: INFO: Pod "pod-f45d36b2-ca01-4c0a-b4ad-cc1e8811d906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.653679837s
STEP: Saw pod success
May 11 22:16:06.234: INFO: Pod "pod-f45d36b2-ca01-4c0a-b4ad-cc1e8811d906" satisfied condition "Succeeded or Failed"
May 11 22:16:06.237: INFO: Trying to get logs from node kali-worker pod pod-f45d36b2-ca01-4c0a-b4ad-cc1e8811d906 container test-container: 
STEP: delete the pod
May 11 22:16:07.129: INFO: Waiting for pod pod-f45d36b2-ca01-4c0a-b4ad-cc1e8811d906 to disappear
May 11 22:16:07.134: INFO: Pod pod-f45d36b2-ca01-4c0a-b4ad-cc1e8811d906 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:16:07.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9953" for this suite.

• [SLOW TEST:7.688 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4443,"failed":0}
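The `(non-root,0644,tmpfs)` variant above writes a 0644-mode file into a memory-backed emptyDir as a non-root user and verifies the resulting permissions. The tmpfs backing is selected with `medium: Memory`; an illustrative spec (pod name, image, command, and UID are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed; omit for node default medium
```

The `(non-root,0777,default)` case that follows differs only in the file mode and in omitting `medium`, so the volume lands on the node's default storage.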
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:16:07.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May 11 22:16:07.784: INFO: Waiting up to 5m0s for pod "pod-8d3852c2-5c0a-4bde-920d-67bcf899e693" in namespace "emptydir-6401" to be "Succeeded or Failed"
May 11 22:16:07.884: INFO: Pod "pod-8d3852c2-5c0a-4bde-920d-67bcf899e693": Phase="Pending", Reason="", readiness=false. Elapsed: 100.458115ms
May 11 22:16:09.888: INFO: Pod "pod-8d3852c2-5c0a-4bde-920d-67bcf899e693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103987152s
May 11 22:16:11.951: INFO: Pod "pod-8d3852c2-5c0a-4bde-920d-67bcf899e693": Phase="Running", Reason="", readiness=true. Elapsed: 4.167474498s
May 11 22:16:13.955: INFO: Pod "pod-8d3852c2-5c0a-4bde-920d-67bcf899e693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.171475923s
STEP: Saw pod success
May 11 22:16:13.955: INFO: Pod "pod-8d3852c2-5c0a-4bde-920d-67bcf899e693" satisfied condition "Succeeded or Failed"
May 11 22:16:13.958: INFO: Trying to get logs from node kali-worker2 pod pod-8d3852c2-5c0a-4bde-920d-67bcf899e693 container test-container: 
STEP: delete the pod
May 11 22:16:13.999: INFO: Waiting for pod pod-8d3852c2-5c0a-4bde-920d-67bcf899e693 to disappear
May 11 22:16:14.026: INFO: Pod pod-8d3852c2-5c0a-4bde-920d-67bcf899e693 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:16:14.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6401" for this suite.

• [SLOW TEST:6.915 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4446,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:16:14.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 11 22:16:14.110: INFO: PodSpec: initContainers in spec.initContainers
May 11 22:17:16.143: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-30edec99-f740-42b5-bd77-db10fac73f8d", GenerateName:"", Namespace:"init-container-8676", SelfLink:"/api/v1/namespaces/init-container-8676/pods/pod-init-30edec99-f740-42b5-bd77-db10fac73f8d", UID:"cc884894-bbf6-4820-9567-2730a73dc3f8", ResourceVersion:"3536411", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724832174, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"110110930"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00484c040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00484c060)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00484c080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00484c0a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-b92gd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002258000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b92gd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b92gd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b92gd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028f82b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0019f2000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028f8490)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028f84b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0028f84b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028f84bc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832174, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832174, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832174, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832174, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.18", PodIP:"10.244.1.230", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.230"}}, StartTime:(*v1.Time)(0xc00484c0c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00484c100), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019f20e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://eabe1f057cd9a06ff6e564ad42a14350a7c77274521fc4cd87bb697081886e78", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00484c120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00484c0e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0028f866f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:17:16.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8676" for this suite.

• [SLOW TEST:63.068 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":256,"skipped":4448,"failed":0}
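The PASSED entry above covers the kubelet's init-container ordering guarantee: with `restartPolicy: Always`, a failing init container is retried with back-off (the pod dump shows `init1` at `RestartCount:3`) while `init2` stays Waiting and the app container `run1` never starts. A minimal stdlib-only sketch of that ordering rule, as an illustration rather than the kubelet's actual code:

```python
def startable_app_containers(init_results, app_containers):
    """Per the init-container contract, app containers may start only
    after every init container has exited 0, in order.  init_results is
    an ordered list of (name, exit_code); exit_code None means the
    container is still running or being retried."""
    for _name, code in init_results:
        if code != 0:  # failed or not yet finished: block everything after it
            return []
    return list(app_containers)

# Mirrors the pod above: init1 (/bin/false) keeps failing, so init2 is
# never run and run1 is never started.
print(startable_app_containers([("init1", 1), ("init2", None)], ["run1"]))  # []
print(startable_app_containers([("init1", 0), ("init2", 0)], ["run1"]))     # ['run1']
```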
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:17:17.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-6e67c506-519d-4846-8dac-faa134598a72
STEP: Creating a pod to test consume secrets
May 11 22:17:18.615: INFO: Waiting up to 5m0s for pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6" in namespace "secrets-2678" to be "Succeeded or Failed"
May 11 22:17:18.849: INFO: Pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6": Phase="Pending", Reason="", readiness=false. Elapsed: 233.721506ms
May 11 22:17:21.226: INFO: Pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.610155637s
May 11 22:17:23.980: INFO: Pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.364311404s
May 11 22:17:26.124: INFO: Pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.508924551s
May 11 22:17:28.327: INFO: Pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.711972369s
May 11 22:17:30.334: INFO: Pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.718326324s
May 11 22:17:32.562: INFO: Pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6": Phase="Running", Reason="", readiness=true. Elapsed: 13.946508049s
May 11 22:17:34.848: INFO: Pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.232726667s
STEP: Saw pod success
May 11 22:17:34.848: INFO: Pod "pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6" satisfied condition "Succeeded or Failed"
May 11 22:17:34.850: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6 container secret-env-test: 
STEP: delete the pod
May 11 22:17:35.018: INFO: Waiting for pod pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6 to disappear
May 11 22:17:35.082: INFO: Pod pod-secrets-c56a1d20-cfad-4fcd-8b01-b5caf5413df6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:17:35.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2678" for this suite.

• [SLOW TEST:17.963 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4448,"failed":0}
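The Secret-to-env-var path exercised above is simple to sketch: a Secret's `.data` values are stored base64-encoded in the API object, and the decoded bytes are what the container sees in its environment. A stdlib-only illustration (the key and value here are hypothetical stand-ins, not the test's actual secret):

```python
import base64

def env_from_secret_key(secret_data, env_name, key):
    """Decode one base64-encoded key of a Secret's .data into the plain
    string a container would see in its environment."""
    return {env_name: base64.b64decode(secret_data[key]).decode()}

# Hypothetical stand-in for secret-test-6e67c506-...
secret_data = {"data-1": base64.b64encode(b"value-1").decode()}
print(env_from_secret_key(secret_data, "SECRET_DATA", "data-1"))  # {'SECRET_DATA': 'value-1'}
```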
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:17:35.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May 11 22:17:36.294: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May 11 22:17:36.873: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May 11 22:17:36.873: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May 11 22:17:37.269: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May 11 22:17:37.269: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May 11 22:17:38.143: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May 11 22:17:38.143: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May 11 22:17:46.175: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:17:46.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-9787" for this suite.

• [SLOW TEST:12.310 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":258,"skipped":4504,"failed":0}
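The LimitRange defaulting the log verifies follows a small set of rules: a missing limit is filled from the LimitRange's `default`, a missing request from `defaultRequest`, and a request omitted alongside an explicitly specified limit defaults to that limit (which is why the partial-resources pod above ends up with cpu 300m for both request and limit). A sketch of that merge under those assumptions, not the LimitRanger admission plugin itself:

```python
def apply_limitrange(container, default_limits, default_requests):
    """Fill in a container's missing resource limits/requests the way
    LimitRange defaulting does: limits from `default`, requests from
    `defaultRequest`, with an omitted request falling back to an
    explicitly specified limit for the same resource."""
    spec_limits = container.get("limits", {})
    limits = dict(spec_limits)
    requests = dict(container.get("requests", {}))
    for res, val in default_limits.items():
        limits.setdefault(res, val)
    for res in limits:
        if res in requests:
            continue
        # Explicit limit with no request -> request equals that limit;
        # otherwise use the LimitRange defaultRequest.
        requests[res] = spec_limits.get(res, default_requests.get(res, limits[res]))
    return {"limits": limits, "requests": requests}

# No resources at all -> pure defaults (as in the log: 100m request, 500m limit).
print(apply_limitrange({}, {"cpu": "500m"}, {"cpu": "100m"}))
# Explicit cpu limit only -> request follows the limit (300m/300m, as above).
print(apply_limitrange({"limits": {"cpu": "300m"}}, {"cpu": "500m"}, {"cpu": "100m"}))
```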
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:17:47.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-a8272c1e-b7f8-4ac6-afe1-6437d9d32a18
STEP: Creating a pod to test consume configMaps
May 11 22:17:50.564: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245" in namespace "projected-3707" to be "Succeeded or Failed"
May 11 22:17:50.653: INFO: Pod "pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245": Phase="Pending", Reason="", readiness=false. Elapsed: 89.017471ms
May 11 22:17:52.904: INFO: Pod "pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339677134s
May 11 22:17:55.046: INFO: Pod "pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481856693s
May 11 22:17:57.412: INFO: Pod "pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245": Phase="Pending", Reason="", readiness=false. Elapsed: 6.847762818s
May 11 22:17:59.606: INFO: Pod "pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245": Phase="Running", Reason="", readiness=true. Elapsed: 9.041700743s
May 11 22:18:01.670: INFO: Pod "pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.105783981s
STEP: Saw pod success
May 11 22:18:01.670: INFO: Pod "pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245" satisfied condition "Succeeded or Failed"
May 11 22:18:01.789: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245 container projected-configmap-volume-test: 
STEP: delete the pod
May 11 22:18:02.910: INFO: Waiting for pod pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245 to disappear
May 11 22:18:03.155: INFO: Pod pod-projected-configmaps-97ac856e-a4f9-49dd-8478-2c77ed0c8245 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:18:03.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3707" for this suite.

• [SLOW TEST:15.761 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4519,"failed":0}
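The "mappings and Item mode" variant above projects selected configMap keys to chosen file paths with an explicit file mode, rather than one file per key with the default mode. A stdlib-only sketch of that projection (key names, paths, and the 0o400 mode are illustrative, not the test's actual manifest):

```python
def project_configmap_items(configmap_data, items, default_mode=0o644):
    """Model a projected configMap volume with item mappings: each listed
    key lands at its mapped path with the given file mode; unlisted keys
    are not projected at all."""
    files = {}
    for item in items:
        files[item["path"]] = {
            "content": configmap_data[item["key"]],
            "mode": item.get("mode", default_mode),
        }
    return files

cm = {"data-2": "value-2", "data-3": "value-3"}
# Only data-2 is listed, so only path/to/data-2 appears in the volume.
print(project_configmap_items(cm, [{"key": "data-2", "path": "path/to/data-2", "mode": 0o400}]))
```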
SSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:18:03.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 11 22:18:29.956: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:18:29.956: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:30.703700       7 log.go:172] (0xc0014242c0) (0xc002422c80) Create stream
I0511 22:18:30.703724       7 log.go:172] (0xc0014242c0) (0xc002422c80) Stream added, broadcasting: 1
I0511 22:18:30.705825       7 log.go:172] (0xc0014242c0) Reply frame received for 1
I0511 22:18:30.705857       7 log.go:172] (0xc0014242c0) (0xc002921f40) Create stream
I0511 22:18:30.705867       7 log.go:172] (0xc0014242c0) (0xc002921f40) Stream added, broadcasting: 3
I0511 22:18:30.706747       7 log.go:172] (0xc0014242c0) Reply frame received for 3
I0511 22:18:30.706776       7 log.go:172] (0xc0014242c0) (0xc002a36460) Create stream
I0511 22:18:30.706796       7 log.go:172] (0xc0014242c0) (0xc002a36460) Stream added, broadcasting: 5
I0511 22:18:30.707862       7 log.go:172] (0xc0014242c0) Reply frame received for 5
I0511 22:18:30.762714       7 log.go:172] (0xc0014242c0) Data frame received for 5
I0511 22:18:30.762751       7 log.go:172] (0xc002a36460) (5) Data frame handling
I0511 22:18:30.762796       7 log.go:172] (0xc0014242c0) Data frame received for 3
I0511 22:18:30.762814       7 log.go:172] (0xc002921f40) (3) Data frame handling
I0511 22:18:30.762832       7 log.go:172] (0xc002921f40) (3) Data frame sent
I0511 22:18:30.762844       7 log.go:172] (0xc0014242c0) Data frame received for 3
I0511 22:18:30.762853       7 log.go:172] (0xc002921f40) (3) Data frame handling
I0511 22:18:30.764054       7 log.go:172] (0xc0014242c0) Data frame received for 1
I0511 22:18:30.764082       7 log.go:172] (0xc002422c80) (1) Data frame handling
I0511 22:18:30.764098       7 log.go:172] (0xc002422c80) (1) Data frame sent
I0511 22:18:30.764117       7 log.go:172] (0xc0014242c0) (0xc002422c80) Stream removed, broadcasting: 1
I0511 22:18:30.764181       7 log.go:172] (0xc0014242c0) (0xc002422c80) Stream removed, broadcasting: 1
I0511 22:18:30.764200       7 log.go:172] (0xc0014242c0) (0xc002921f40) Stream removed, broadcasting: 3
I0511 22:18:30.764216       7 log.go:172] (0xc0014242c0) (0xc002a36460) Stream removed, broadcasting: 5
May 11 22:18:30.764: INFO: Exec stderr: ""
May 11 22:18:30.764: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:18:30.764: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:30.765461       7 log.go:172] (0xc0014242c0) Go away received
I0511 22:18:30.798487       7 log.go:172] (0xc0014249a0) (0xc002423220) Create stream
I0511 22:18:30.798517       7 log.go:172] (0xc0014249a0) (0xc002423220) Stream added, broadcasting: 1
I0511 22:18:30.801786       7 log.go:172] (0xc0014249a0) Reply frame received for 1
I0511 22:18:30.801808       7 log.go:172] (0xc0014249a0) (0xc00120c5a0) Create stream
I0511 22:18:30.801816       7 log.go:172] (0xc0014249a0) (0xc00120c5a0) Stream added, broadcasting: 3
I0511 22:18:30.802494       7 log.go:172] (0xc0014249a0) Reply frame received for 3
I0511 22:18:30.802526       7 log.go:172] (0xc0014249a0) (0xc00120c6e0) Create stream
I0511 22:18:30.802536       7 log.go:172] (0xc0014249a0) (0xc00120c6e0) Stream added, broadcasting: 5
I0511 22:18:30.803201       7 log.go:172] (0xc0014249a0) Reply frame received for 5
I0511 22:18:30.852032       7 log.go:172] (0xc0014249a0) Data frame received for 3
I0511 22:18:30.852059       7 log.go:172] (0xc00120c5a0) (3) Data frame handling
I0511 22:18:30.852066       7 log.go:172] (0xc00120c5a0) (3) Data frame sent
I0511 22:18:30.852075       7 log.go:172] (0xc0014249a0) Data frame received for 3
I0511 22:18:30.852088       7 log.go:172] (0xc00120c5a0) (3) Data frame handling
I0511 22:18:30.852115       7 log.go:172] (0xc0014249a0) Data frame received for 5
I0511 22:18:30.852122       7 log.go:172] (0xc00120c6e0) (5) Data frame handling
I0511 22:18:30.853376       7 log.go:172] (0xc0014249a0) Data frame received for 1
I0511 22:18:30.853393       7 log.go:172] (0xc002423220) (1) Data frame handling
I0511 22:18:30.853401       7 log.go:172] (0xc002423220) (1) Data frame sent
I0511 22:18:30.853412       7 log.go:172] (0xc0014249a0) (0xc002423220) Stream removed, broadcasting: 1
I0511 22:18:30.853427       7 log.go:172] (0xc0014249a0) Go away received
I0511 22:18:30.853516       7 log.go:172] (0xc0014249a0) (0xc002423220) Stream removed, broadcasting: 1
I0511 22:18:30.853538       7 log.go:172] (0xc0014249a0) (0xc00120c5a0) Stream removed, broadcasting: 3
I0511 22:18:30.853553       7 log.go:172] (0xc0014249a0) (0xc00120c6e0) Stream removed, broadcasting: 5
May 11 22:18:30.853: INFO: Exec stderr: ""
May 11 22:18:30.853: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:18:30.853: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:30.906298       7 log.go:172] (0xc001424fd0) (0xc002423400) Create stream
I0511 22:18:30.906324       7 log.go:172] (0xc001424fd0) (0xc002423400) Stream added, broadcasting: 1
I0511 22:18:30.907816       7 log.go:172] (0xc001424fd0) Reply frame received for 1
I0511 22:18:30.907851       7 log.go:172] (0xc001424fd0) (0xc002423540) Create stream
I0511 22:18:30.907861       7 log.go:172] (0xc001424fd0) (0xc002423540) Stream added, broadcasting: 3
I0511 22:18:30.908414       7 log.go:172] (0xc001424fd0) Reply frame received for 3
I0511 22:18:30.908441       7 log.go:172] (0xc001424fd0) (0xc0024235e0) Create stream
I0511 22:18:30.908450       7 log.go:172] (0xc001424fd0) (0xc0024235e0) Stream added, broadcasting: 5
I0511 22:18:30.908970       7 log.go:172] (0xc001424fd0) Reply frame received for 5
I0511 22:18:30.957898       7 log.go:172] (0xc001424fd0) Data frame received for 5
I0511 22:18:30.957943       7 log.go:172] (0xc0024235e0) (5) Data frame handling
I0511 22:18:30.957970       7 log.go:172] (0xc001424fd0) Data frame received for 3
I0511 22:18:30.957987       7 log.go:172] (0xc002423540) (3) Data frame handling
I0511 22:18:30.958004       7 log.go:172] (0xc002423540) (3) Data frame sent
I0511 22:18:30.958021       7 log.go:172] (0xc001424fd0) Data frame received for 3
I0511 22:18:30.958032       7 log.go:172] (0xc002423540) (3) Data frame handling
I0511 22:18:30.959348       7 log.go:172] (0xc001424fd0) Data frame received for 1
I0511 22:18:30.959372       7 log.go:172] (0xc002423400) (1) Data frame handling
I0511 22:18:30.959391       7 log.go:172] (0xc002423400) (1) Data frame sent
I0511 22:18:30.959415       7 log.go:172] (0xc001424fd0) (0xc002423400) Stream removed, broadcasting: 1
I0511 22:18:30.959435       7 log.go:172] (0xc001424fd0) Go away received
I0511 22:18:30.959568       7 log.go:172] (0xc001424fd0) (0xc002423400) Stream removed, broadcasting: 1
I0511 22:18:30.959595       7 log.go:172] (0xc001424fd0) (0xc002423540) Stream removed, broadcasting: 3
I0511 22:18:30.959606       7 log.go:172] (0xc001424fd0) (0xc0024235e0) Stream removed, broadcasting: 5
May 11 22:18:30.959: INFO: Exec stderr: ""
May 11 22:18:30.959: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:18:30.959: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:30.993393       7 log.go:172] (0xc002542630) (0xc00120d5e0) Create stream
I0511 22:18:30.993421       7 log.go:172] (0xc002542630) (0xc00120d5e0) Stream added, broadcasting: 1
I0511 22:18:30.995775       7 log.go:172] (0xc002542630) Reply frame received for 1
I0511 22:18:30.995834       7 log.go:172] (0xc002542630) (0xc002c03ea0) Create stream
I0511 22:18:30.995853       7 log.go:172] (0xc002542630) (0xc002c03ea0) Stream added, broadcasting: 3
I0511 22:18:30.996684       7 log.go:172] (0xc002542630) Reply frame received for 3
I0511 22:18:30.996701       7 log.go:172] (0xc002542630) (0xc0024ac000) Create stream
I0511 22:18:30.996706       7 log.go:172] (0xc002542630) (0xc0024ac000) Stream added, broadcasting: 5
I0511 22:18:30.997658       7 log.go:172] (0xc002542630) Reply frame received for 5
I0511 22:18:31.047543       7 log.go:172] (0xc002542630) Data frame received for 5
I0511 22:18:31.047580       7 log.go:172] (0xc0024ac000) (5) Data frame handling
I0511 22:18:31.047619       7 log.go:172] (0xc002542630) Data frame received for 3
I0511 22:18:31.047661       7 log.go:172] (0xc002c03ea0) (3) Data frame handling
I0511 22:18:31.047680       7 log.go:172] (0xc002c03ea0) (3) Data frame sent
I0511 22:18:31.047693       7 log.go:172] (0xc002542630) Data frame received for 3
I0511 22:18:31.047702       7 log.go:172] (0xc002c03ea0) (3) Data frame handling
I0511 22:18:31.048880       7 log.go:172] (0xc002542630) Data frame received for 1
I0511 22:18:31.048920       7 log.go:172] (0xc00120d5e0) (1) Data frame handling
I0511 22:18:31.048967       7 log.go:172] (0xc00120d5e0) (1) Data frame sent
I0511 22:18:31.048985       7 log.go:172] (0xc002542630) (0xc00120d5e0) Stream removed, broadcasting: 1
I0511 22:18:31.048997       7 log.go:172] (0xc002542630) Go away received
I0511 22:18:31.049410       7 log.go:172] (0xc002542630) (0xc00120d5e0) Stream removed, broadcasting: 1
I0511 22:18:31.049433       7 log.go:172] (0xc002542630) (0xc002c03ea0) Stream removed, broadcasting: 3
I0511 22:18:31.049444       7 log.go:172] (0xc002542630) (0xc0024ac000) Stream removed, broadcasting: 5
May 11 22:18:31.049: INFO: Exec stderr: ""
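The exec rounds in this test all probe one rule: the kubelet manages a container's /etc/hosts only when the pod is not on the host network and the container does not mount its own file over /etc/hosts (here busybox-1 and busybox-2 are kubelet-managed, while busybox-3 mounts /etc/hosts itself and is not). A sketch of that predicate, as an illustration of the rule rather than kubelet code:

```python
def kubelet_manages_etc_hosts(host_network, container_mount_paths):
    """True when the kubelet writes the pod's /etc/hosts: the pod must
    not use the host network, and the container must not mount its own
    volume over /etc/hosts."""
    return not host_network and "/etc/hosts" not in container_mount_paths

print(kubelet_manages_etc_hosts(False, []))              # True  (busybox-1/2)
print(kubelet_manages_etc_hosts(False, ["/etc/hosts"]))  # False (busybox-3)
print(kubelet_manages_etc_hosts(True, []))               # False (hostNetwork pod)
```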
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 11 22:18:31.049: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:18:31.049: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:31.077974       7 log.go:172] (0xc002542e70) (0xc001ab0000) Create stream
I0511 22:18:31.077990       7 log.go:172] (0xc002542e70) (0xc001ab0000) Stream added, broadcasting: 1
I0511 22:18:31.079915       7 log.go:172] (0xc002542e70) Reply frame received for 1
I0511 22:18:31.079953       7 log.go:172] (0xc002542e70) (0xc0014a0000) Create stream
I0511 22:18:31.079968       7 log.go:172] (0xc002542e70) (0xc0014a0000) Stream added, broadcasting: 3
I0511 22:18:31.080803       7 log.go:172] (0xc002542e70) Reply frame received for 3
I0511 22:18:31.080827       7 log.go:172] (0xc002542e70) (0xc001ab0140) Create stream
I0511 22:18:31.080835       7 log.go:172] (0xc002542e70) (0xc001ab0140) Stream added, broadcasting: 5
I0511 22:18:31.081986       7 log.go:172] (0xc002542e70) Reply frame received for 5
I0511 22:18:31.139144       7 log.go:172] (0xc002542e70) Data frame received for 5
I0511 22:18:31.139177       7 log.go:172] (0xc001ab0140) (5) Data frame handling
I0511 22:18:31.139196       7 log.go:172] (0xc002542e70) Data frame received for 3
I0511 22:18:31.139206       7 log.go:172] (0xc0014a0000) (3) Data frame handling
I0511 22:18:31.139216       7 log.go:172] (0xc0014a0000) (3) Data frame sent
I0511 22:18:31.139225       7 log.go:172] (0xc002542e70) Data frame received for 3
I0511 22:18:31.139236       7 log.go:172] (0xc0014a0000) (3) Data frame handling
I0511 22:18:31.140356       7 log.go:172] (0xc002542e70) Data frame received for 1
I0511 22:18:31.140378       7 log.go:172] (0xc001ab0000) (1) Data frame handling
I0511 22:18:31.140392       7 log.go:172] (0xc001ab0000) (1) Data frame sent
I0511 22:18:31.140410       7 log.go:172] (0xc002542e70) (0xc001ab0000) Stream removed, broadcasting: 1
I0511 22:18:31.140435       7 log.go:172] (0xc002542e70) Go away received
I0511 22:18:31.140477       7 log.go:172] (0xc002542e70) (0xc001ab0000) Stream removed, broadcasting: 1
I0511 22:18:31.140498       7 log.go:172] (0xc002542e70) (0xc0014a0000) Stream removed, broadcasting: 3
I0511 22:18:31.140511       7 log.go:172] (0xc002542e70) (0xc001ab0140) Stream removed, broadcasting: 5
May 11 22:18:31.140: INFO: Exec stderr: ""
May 11 22:18:31.140: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:18:31.140: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:31.164243       7 log.go:172] (0xc0025434a0) (0xc001ab05a0) Create stream
I0511 22:18:31.164266       7 log.go:172] (0xc0025434a0) (0xc001ab05a0) Stream added, broadcasting: 1
I0511 22:18:31.166914       7 log.go:172] (0xc0025434a0) Reply frame received for 1
I0511 22:18:31.166951       7 log.go:172] (0xc0025434a0) (0xc002423680) Create stream
I0511 22:18:31.166965       7 log.go:172] (0xc0025434a0) (0xc002423680) Stream added, broadcasting: 3
I0511 22:18:31.167791       7 log.go:172] (0xc0025434a0) Reply frame received for 3
I0511 22:18:31.167825       7 log.go:172] (0xc0025434a0) (0xc0024ac0a0) Create stream
I0511 22:18:31.167839       7 log.go:172] (0xc0025434a0) (0xc0024ac0a0) Stream added, broadcasting: 5
I0511 22:18:31.168590       7 log.go:172] (0xc0025434a0) Reply frame received for 5
I0511 22:18:31.221681       7 log.go:172] (0xc0025434a0) Data frame received for 5
I0511 22:18:31.221702       7 log.go:172] (0xc0024ac0a0) (5) Data frame handling
I0511 22:18:31.221720       7 log.go:172] (0xc0025434a0) Data frame received for 3
I0511 22:18:31.221738       7 log.go:172] (0xc002423680) (3) Data frame handling
I0511 22:18:31.221752       7 log.go:172] (0xc002423680) (3) Data frame sent
I0511 22:18:31.221760       7 log.go:172] (0xc0025434a0) Data frame received for 3
I0511 22:18:31.221777       7 log.go:172] (0xc002423680) (3) Data frame handling
I0511 22:18:31.222825       7 log.go:172] (0xc0025434a0) Data frame received for 1
I0511 22:18:31.222840       7 log.go:172] (0xc001ab05a0) (1) Data frame handling
I0511 22:18:31.222848       7 log.go:172] (0xc001ab05a0) (1) Data frame sent
I0511 22:18:31.222857       7 log.go:172] (0xc0025434a0) (0xc001ab05a0) Stream removed, broadcasting: 1
I0511 22:18:31.222867       7 log.go:172] (0xc0025434a0) Go away received
I0511 22:18:31.222996       7 log.go:172] (0xc0025434a0) (0xc001ab05a0) Stream removed, broadcasting: 1
I0511 22:18:31.223021       7 log.go:172] (0xc0025434a0) (0xc002423680) Stream removed, broadcasting: 3
I0511 22:18:31.223040       7 log.go:172] (0xc0025434a0) (0xc0024ac0a0) Stream removed, broadcasting: 5
May 11 22:18:31.223: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 11 22:18:31.223: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:18:31.223: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:31.248338       7 log.go:172] (0xc002543b80) (0xc001ab0780) Create stream
I0511 22:18:31.248362       7 log.go:172] (0xc002543b80) (0xc001ab0780) Stream added, broadcasting: 1
I0511 22:18:31.249946       7 log.go:172] (0xc002543b80) Reply frame received for 1
I0511 22:18:31.249974       7 log.go:172] (0xc002543b80) (0xc001ab08c0) Create stream
I0511 22:18:31.249988       7 log.go:172] (0xc002543b80) (0xc001ab08c0) Stream added, broadcasting: 3
I0511 22:18:31.250714       7 log.go:172] (0xc002543b80) Reply frame received for 3
I0511 22:18:31.250730       7 log.go:172] (0xc002543b80) (0xc002a365a0) Create stream
I0511 22:18:31.250736       7 log.go:172] (0xc002543b80) (0xc002a365a0) Stream added, broadcasting: 5
I0511 22:18:31.251461       7 log.go:172] (0xc002543b80) Reply frame received for 5
I0511 22:18:31.303286       7 log.go:172] (0xc002543b80) Data frame received for 3
I0511 22:18:31.303310       7 log.go:172] (0xc001ab08c0) (3) Data frame handling
I0511 22:18:31.303319       7 log.go:172] (0xc001ab08c0) (3) Data frame sent
I0511 22:18:31.303327       7 log.go:172] (0xc002543b80) Data frame received for 3
I0511 22:18:31.303334       7 log.go:172] (0xc001ab08c0) (3) Data frame handling
I0511 22:18:31.303353       7 log.go:172] (0xc002543b80) Data frame received for 5
I0511 22:18:31.303365       7 log.go:172] (0xc002a365a0) (5) Data frame handling
I0511 22:18:31.304420       7 log.go:172] (0xc002543b80) Data frame received for 1
I0511 22:18:31.304437       7 log.go:172] (0xc001ab0780) (1) Data frame handling
I0511 22:18:31.304446       7 log.go:172] (0xc001ab0780) (1) Data frame sent
I0511 22:18:31.304457       7 log.go:172] (0xc002543b80) (0xc001ab0780) Stream removed, broadcasting: 1
I0511 22:18:31.304541       7 log.go:172] (0xc002543b80) (0xc001ab0780) Stream removed, broadcasting: 1
I0511 22:18:31.304563       7 log.go:172] (0xc002543b80) (0xc001ab08c0) Stream removed, broadcasting: 3
I0511 22:18:31.304571       7 log.go:172] (0xc002543b80) (0xc002a365a0) Stream removed, broadcasting: 5
May 11 22:18:31.304: INFO: Exec stderr: ""
May 11 22:18:31.304: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:18:31.304: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:31.304646       7 log.go:172] (0xc002543b80) Go away received
I0511 22:18:31.330639       7 log.go:172] (0xc001425600) (0xc0024239a0) Create stream
I0511 22:18:31.330672       7 log.go:172] (0xc001425600) (0xc0024239a0) Stream added, broadcasting: 1
I0511 22:18:31.332915       7 log.go:172] (0xc001425600) Reply frame received for 1
I0511 22:18:31.332941       7 log.go:172] (0xc001425600) (0xc002423a40) Create stream
I0511 22:18:31.332949       7 log.go:172] (0xc001425600) (0xc002423a40) Stream added, broadcasting: 3
I0511 22:18:31.333727       7 log.go:172] (0xc001425600) Reply frame received for 3
I0511 22:18:31.333751       7 log.go:172] (0xc001425600) (0xc0024ac140) Create stream
I0511 22:18:31.333759       7 log.go:172] (0xc001425600) (0xc0024ac140) Stream added, broadcasting: 5
I0511 22:18:31.334432       7 log.go:172] (0xc001425600) Reply frame received for 5
I0511 22:18:31.390606       7 log.go:172] (0xc001425600) Data frame received for 3
I0511 22:18:31.390638       7 log.go:172] (0xc002423a40) (3) Data frame handling
I0511 22:18:31.390645       7 log.go:172] (0xc002423a40) (3) Data frame sent
I0511 22:18:31.390662       7 log.go:172] (0xc001425600) Data frame received for 3
I0511 22:18:31.390676       7 log.go:172] (0xc002423a40) (3) Data frame handling
I0511 22:18:31.390690       7 log.go:172] (0xc001425600) Data frame received for 5
I0511 22:18:31.390697       7 log.go:172] (0xc0024ac140) (5) Data frame handling
I0511 22:18:31.391658       7 log.go:172] (0xc001425600) Data frame received for 1
I0511 22:18:31.391688       7 log.go:172] (0xc0024239a0) (1) Data frame handling
I0511 22:18:31.391709       7 log.go:172] (0xc0024239a0) (1) Data frame sent
I0511 22:18:31.391730       7 log.go:172] (0xc001425600) (0xc0024239a0) Stream removed, broadcasting: 1
I0511 22:18:31.391789       7 log.go:172] (0xc001425600) (0xc0024239a0) Stream removed, broadcasting: 1
I0511 22:18:31.391816       7 log.go:172] (0xc001425600) (0xc002423a40) Stream removed, broadcasting: 3
I0511 22:18:31.391842       7 log.go:172] (0xc001425600) (0xc0024ac140) Stream removed, broadcasting: 5
May 11 22:18:31.391: INFO: Exec stderr: ""
May 11 22:18:31.391: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
I0511 22:18:31.391908       7 log.go:172] (0xc001425600) Go away received
May 11 22:18:31.391: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:31.415560       7 log.go:172] (0xc001f90370) (0xc002a36aa0) Create stream
I0511 22:18:31.415585       7 log.go:172] (0xc001f90370) (0xc002a36aa0) Stream added, broadcasting: 1
I0511 22:18:31.417590       7 log.go:172] (0xc001f90370) Reply frame received for 1
I0511 22:18:31.417615       7 log.go:172] (0xc001f90370) (0xc0014a00a0) Create stream
I0511 22:18:31.417624       7 log.go:172] (0xc001f90370) (0xc0014a00a0) Stream added, broadcasting: 3
I0511 22:18:31.418409       7 log.go:172] (0xc001f90370) Reply frame received for 3
I0511 22:18:31.418436       7 log.go:172] (0xc001f90370) (0xc002a36be0) Create stream
I0511 22:18:31.418445       7 log.go:172] (0xc001f90370) (0xc002a36be0) Stream added, broadcasting: 5
I0511 22:18:31.419177       7 log.go:172] (0xc001f90370) Reply frame received for 5
I0511 22:18:31.470296       7 log.go:172] (0xc001f90370) Data frame received for 5
I0511 22:18:31.470326       7 log.go:172] (0xc002a36be0) (5) Data frame handling
I0511 22:18:31.470344       7 log.go:172] (0xc001f90370) Data frame received for 3
I0511 22:18:31.470368       7 log.go:172] (0xc0014a00a0) (3) Data frame handling
I0511 22:18:31.470380       7 log.go:172] (0xc0014a00a0) (3) Data frame sent
I0511 22:18:31.470422       7 log.go:172] (0xc001f90370) Data frame received for 3
I0511 22:18:31.470440       7 log.go:172] (0xc0014a00a0) (3) Data frame handling
I0511 22:18:31.471705       7 log.go:172] (0xc001f90370) Data frame received for 1
I0511 22:18:31.471720       7 log.go:172] (0xc002a36aa0) (1) Data frame handling
I0511 22:18:31.471727       7 log.go:172] (0xc002a36aa0) (1) Data frame sent
I0511 22:18:31.471740       7 log.go:172] (0xc001f90370) (0xc002a36aa0) Stream removed, broadcasting: 1
I0511 22:18:31.471805       7 log.go:172] (0xc001f90370) (0xc002a36aa0) Stream removed, broadcasting: 1
I0511 22:18:31.471818       7 log.go:172] (0xc001f90370) (0xc0014a00a0) Stream removed, broadcasting: 3
I0511 22:18:31.471831       7 log.go:172] (0xc001f90370) (0xc002a36be0) Stream removed, broadcasting: 5
May 11 22:18:31.471: INFO: Exec stderr: ""
May 11 22:18:31.471: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3735 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:18:31.471: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:18:31.471891       7 log.go:172] (0xc001f90370) Go away received
I0511 22:18:31.494667       7 log.go:172] (0xc00200cbb0) (0xc0024ac3c0) Create stream
I0511 22:18:31.494690       7 log.go:172] (0xc00200cbb0) (0xc0024ac3c0) Stream added, broadcasting: 1
I0511 22:18:31.496199       7 log.go:172] (0xc00200cbb0) Reply frame received for 1
I0511 22:18:31.496218       7 log.go:172] (0xc00200cbb0) (0xc002423ae0) Create stream
I0511 22:18:31.496224       7 log.go:172] (0xc00200cbb0) (0xc002423ae0) Stream added, broadcasting: 3
I0511 22:18:31.496972       7 log.go:172] (0xc00200cbb0) Reply frame received for 3
I0511 22:18:31.497013       7 log.go:172] (0xc00200cbb0) (0xc002423b80) Create stream
I0511 22:18:31.497032       7 log.go:172] (0xc00200cbb0) (0xc002423b80) Stream added, broadcasting: 5
I0511 22:18:31.497876       7 log.go:172] (0xc00200cbb0) Reply frame received for 5
I0511 22:18:31.544351       7 log.go:172] (0xc00200cbb0) Data frame received for 5
I0511 22:18:31.544376       7 log.go:172] (0xc002423b80) (5) Data frame handling
I0511 22:18:31.544390       7 log.go:172] (0xc00200cbb0) Data frame received for 3
I0511 22:18:31.544398       7 log.go:172] (0xc002423ae0) (3) Data frame handling
I0511 22:18:31.544410       7 log.go:172] (0xc002423ae0) (3) Data frame sent
I0511 22:18:31.544422       7 log.go:172] (0xc00200cbb0) Data frame received for 3
I0511 22:18:31.544431       7 log.go:172] (0xc002423ae0) (3) Data frame handling
I0511 22:18:31.545399       7 log.go:172] (0xc00200cbb0) Data frame received for 1
I0511 22:18:31.545417       7 log.go:172] (0xc0024ac3c0) (1) Data frame handling
I0511 22:18:31.545427       7 log.go:172] (0xc0024ac3c0) (1) Data frame sent
I0511 22:18:31.545434       7 log.go:172] (0xc00200cbb0) (0xc0024ac3c0) Stream removed, broadcasting: 1
I0511 22:18:31.545479       7 log.go:172] (0xc00200cbb0) Go away received
I0511 22:18:31.545502       7 log.go:172] (0xc00200cbb0) (0xc0024ac3c0) Stream removed, broadcasting: 1
I0511 22:18:31.545517       7 log.go:172] (0xc00200cbb0) (0xc002423ae0) Stream removed, broadcasting: 3
I0511 22:18:31.545526       7 log.go:172] (0xc00200cbb0) (0xc002423b80) Stream removed, broadcasting: 5
May 11 22:18:31.545: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:18:31.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3735" for this suite.

• [SLOW TEST:28.389 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4532,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:18:31.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May 11 22:18:32.450: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 11 22:18:44.910: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:18:44.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1040" for this suite.

• [SLOW TEST:13.367 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4558,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:18:44.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-3e315f23-9969-4324-aed9-2019bec3b9f4
STEP: Creating configMap with name cm-test-opt-upd-e32befc5-244f-4f88-ba89-5c982920586a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3e315f23-9969-4324-aed9-2019bec3b9f4
STEP: Updating configmap cm-test-opt-upd-e32befc5-244f-4f88-ba89-5c982920586a
STEP: Creating configMap with name cm-test-opt-create-3e733cbf-5865-448e-8745-afcfdee146ed
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:20:15.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4595" for this suite.

• [SLOW TEST:90.243 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4577,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:20:15.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
May 11 22:20:15.300: INFO: Waiting up to 5m0s for pod "pod-3de44c83-ac81-4bde-b039-5f77749f5900" in namespace "emptydir-7855" to be "Succeeded or Failed"
May 11 22:20:15.302: INFO: Pod "pod-3de44c83-ac81-4bde-b039-5f77749f5900": Phase="Pending", Reason="", readiness=false. Elapsed: 2.850206ms
May 11 22:20:17.382: INFO: Pod "pod-3de44c83-ac81-4bde-b039-5f77749f5900": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08270622s
May 11 22:20:19.385: INFO: Pod "pod-3de44c83-ac81-4bde-b039-5f77749f5900": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08561981s
May 11 22:20:21.567: INFO: Pod "pod-3de44c83-ac81-4bde-b039-5f77749f5900": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.267769706s
STEP: Saw pod success
May 11 22:20:21.567: INFO: Pod "pod-3de44c83-ac81-4bde-b039-5f77749f5900" satisfied condition "Succeeded or Failed"
May 11 22:20:21.571: INFO: Trying to get logs from node kali-worker pod pod-3de44c83-ac81-4bde-b039-5f77749f5900 container test-container: 
STEP: delete the pod
May 11 22:20:22.136: INFO: Waiting for pod pod-3de44c83-ac81-4bde-b039-5f77749f5900 to disappear
May 11 22:20:22.257: INFO: Pod pod-3de44c83-ac81-4bde-b039-5f77749f5900 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:20:22.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7855" for this suite.

• [SLOW TEST:7.599 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4585,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:20:22.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-aa3369c0-4c36-4d4a-9ce2-14213181097c
STEP: Creating a pod to test consume configMaps
May 11 22:20:23.648: INFO: Waiting up to 5m0s for pod "pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52" in namespace "configmap-2192" to be "Succeeded or Failed"
May 11 22:20:23.662: INFO: Pod "pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52": Phase="Pending", Reason="", readiness=false. Elapsed: 14.468259ms
May 11 22:20:25.666: INFO: Pod "pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017843519s
May 11 22:20:28.048: INFO: Pod "pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.399754899s
May 11 22:20:30.051: INFO: Pod "pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.402818593s
May 11 22:20:32.536: INFO: Pod "pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.888056905s
STEP: Saw pod success
May 11 22:20:32.536: INFO: Pod "pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52" satisfied condition "Succeeded or Failed"
May 11 22:20:32.541: INFO: Trying to get logs from node kali-worker pod pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52 container configmap-volume-test: 
STEP: delete the pod
May 11 22:20:33.186: INFO: Waiting for pod pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52 to disappear
May 11 22:20:33.567: INFO: Pod pod-configmaps-75ee5d99-6086-4e36-84fe-98e08245ee52 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:20:33.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2192" for this suite.

• [SLOW TEST:10.834 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4597,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:20:33.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 11 22:20:34.467: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Pending, waiting for it to be Running (with Ready = true)
May 11 22:20:36.501: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Pending, waiting for it to be Running (with Ready = true)
May 11 22:20:38.471: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Pending, waiting for it to be Running (with Ready = true)
May 11 22:20:40.471: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = false)
May 11 22:20:42.472: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = false)
May 11 22:20:44.471: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = false)
May 11 22:20:46.471: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = false)
May 11 22:20:48.471: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = false)
May 11 22:20:50.495: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = false)
May 11 22:20:52.856: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = false)
May 11 22:20:54.527: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = false)
May 11 22:20:56.474: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = false)
May 11 22:20:58.472: INFO: The status of Pod test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf is Running (Ready = true)
May 11 22:20:58.475: INFO: Container started at 2020-05-11 22:20:37 +0000 UTC, pod became ready at 2020-05-11 22:20:56 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:20:58.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1839" for this suite.

• [SLOW TEST:24.889 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4641,"failed":0}
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:20:58.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
May 11 22:20:58.731: INFO: Waiting up to 5m0s for pod "pod-081411ec-e32c-4658-9aac-d19a01dad53c" in namespace "emptydir-6174" to be "Succeeded or Failed"
May 11 22:20:59.102: INFO: Pod "pod-081411ec-e32c-4658-9aac-d19a01dad53c": Phase="Pending", Reason="", readiness=false. Elapsed: 370.697785ms
May 11 22:21:01.105: INFO: Pod "pod-081411ec-e32c-4658-9aac-d19a01dad53c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374023736s
May 11 22:21:03.108: INFO: Pod "pod-081411ec-e32c-4658-9aac-d19a01dad53c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377354687s
May 11 22:21:05.118: INFO: Pod "pod-081411ec-e32c-4658-9aac-d19a01dad53c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.387156618s
STEP: Saw pod success
May 11 22:21:05.118: INFO: Pod "pod-081411ec-e32c-4658-9aac-d19a01dad53c" satisfied condition "Succeeded or Failed"
May 11 22:21:05.120: INFO: Trying to get logs from node kali-worker2 pod pod-081411ec-e32c-4658-9aac-d19a01dad53c container test-container: 
STEP: delete the pod
May 11 22:21:05.223: INFO: Waiting for pod pod-081411ec-e32c-4658-9aac-d19a01dad53c to disappear
May 11 22:21:05.286: INFO: Pod pod-081411ec-e32c-4658-9aac-d19a01dad53c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:21:05.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6174" for this suite.

• [SLOW TEST:6.807 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4641,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:21:05.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 11 22:21:05.876: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 11 22:21:05.888: INFO: Waiting for terminating namespaces to be deleted...
May 11 22:21:05.891: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May 11 22:21:05.898: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 22:21:05.898: INFO: 	Container kindnet-cni ready: true, restart count 1
May 11 22:21:05.898: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 22:21:05.898: INFO: 	Container kube-proxy ready: true, restart count 0
May 11 22:21:05.898: INFO: test-webserver-4530107e-967b-46bc-8461-dd6fe16492bf from container-probe-1839 started at 2020-05-11 22:20:34 +0000 UTC (1 container statuses recorded)
May 11 22:21:05.898: INFO: 	Container test-webserver ready: true, restart count 0
May 11 22:21:05.898: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May 11 22:21:05.902: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 22:21:05.902: INFO: 	Container kindnet-cni ready: true, restart count 0
May 11 22:21:05.902: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 11 22:21:05.902: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-023544c2-ba50-40b9-a85d-8cf21768b659 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-023544c2-ba50-40b9-a85d-8cf21768b659 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-023544c2-ba50-40b9-a85d-8cf21768b659
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:21:25.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-805" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:20.445 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":267,"skipped":4643,"failed":0}
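The test above exercises the scheduler's host-port predicate: pods may share a hostPort as long as the (hostIP, port, protocol) tuples do not overlap, where 0.0.0.0 overlaps every address. A minimal Python model of that rule (an illustrative sketch, not the e2e code itself) replays the three pods from the log:

```python
def host_ports_conflict(a, b):
    """Model of the scheduler's host-port check: two (hostIP, port, protocol)
    tuples conflict only if port and protocol match and the IPs overlap
    (the wildcard 0.0.0.0 overlaps every address)."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or ip_a == "0.0.0.0" or ip_b == "0.0.0.0"

# The three pods created by the test: same hostPort 54321 throughout.
pod1 = ("127.0.0.1", 54321, "TCP")
pod2 = ("127.0.0.2", 54321, "TCP")
pod3 = ("127.0.0.2", 54321, "UDP")

assert not host_ports_conflict(pod1, pod2)  # different hostIP
assert not host_ports_conflict(pod2, pod3)  # different protocol
assert host_ports_conflict(pod1, ("0.0.0.0", 54321, "TCP"))  # wildcard overlaps
```

Because none of the three pairs conflict, all three pods schedule onto the same node, which is exactly what the test asserts.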
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:21:25.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f
May 11 22:21:26.630: INFO: Pod name my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f: Found 0 pods out of 1
May 11 22:21:31.665: INFO: Pod name my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f: Found 1 pods out of 1
May 11 22:21:31.665: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f" are running
May 11 22:21:33.767: INFO: Pod "my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f-g5zwl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 22:21:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 22:21:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 22:21:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 22:21:26 +0000 UTC Reason: Message:}])
May 11 22:21:33.767: INFO: Trying to dial the pod
May 11 22:21:38.779: INFO: Controller my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f: Got expected result from replica 1 [my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f-g5zwl]: "my-hostname-basic-0a8f0439-fc48-43d9-9f91-ea01503cc92f-g5zwl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:21:38.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5088" for this suite.

• [SLOW TEST:13.048 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":268,"skipped":4655,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:21:38.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 22:21:40.325: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 22:21:42.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832500, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832500, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832500, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832500, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 22:21:44.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832500, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832500, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832500, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832500, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 22:21:47.371: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:21:47.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9729" for this suite.
STEP: Destroying namespace "webhook-9729-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.460 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":269,"skipped":4657,"failed":0}
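The fail-closed behavior verified above follows from the webhook's failurePolicy: when the API server cannot reach the webhook, `failurePolicy: Fail` rejects the request, while `Ignore` would let it through. A toy model of that decision (assumptions labeled; this is not the apiserver's admission code):

```python
def admit(webhook_reachable, failure_policy):
    """Toy model of admission with a possibly-unreachable webhook.
    A reachable webhook would return its own verdict (assumed admitted here);
    an unreachable one is governed by failurePolicy: Fail rejects, Ignore admits."""
    if webhook_reachable:
        return True  # stand-in for the webhook's actual response
    return failure_policy != "Fail"

assert admit(False, "Fail") is False    # the test's case: unconditional rejection
assert admit(False, "Ignore") is True   # fail-open alternative
```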
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:21:48.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 22:21:48.324: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59a4abd0-576c-4330-949b-b2651eb85fb3" in namespace "downward-api-4632" to be "Succeeded or Failed"
May 11 22:21:48.344: INFO: Pod "downwardapi-volume-59a4abd0-576c-4330-949b-b2651eb85fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.882354ms
May 11 22:21:50.347: INFO: Pod "downwardapi-volume-59a4abd0-576c-4330-949b-b2651eb85fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022484139s
May 11 22:21:52.475: INFO: Pod "downwardapi-volume-59a4abd0-576c-4330-949b-b2651eb85fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150893829s
May 11 22:21:54.478: INFO: Pod "downwardapi-volume-59a4abd0-576c-4330-949b-b2651eb85fb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.154080627s
STEP: Saw pod success
May 11 22:21:54.478: INFO: Pod "downwardapi-volume-59a4abd0-576c-4330-949b-b2651eb85fb3" satisfied condition "Succeeded or Failed"
May 11 22:21:54.481: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-59a4abd0-576c-4330-949b-b2651eb85fb3 container client-container: 
STEP: delete the pod
May 11 22:21:54.570: INFO: Waiting for pod downwardapi-volume-59a4abd0-576c-4330-949b-b2651eb85fb3 to disappear
May 11 22:21:54.640: INFO: Pod downwardapi-volume-59a4abd0-576c-4330-949b-b2651eb85fb3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:21:54.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4632" for this suite.

• [SLOW TEST:6.400 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4663,"failed":0}
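The behavior under test is the downward API's fallback: when a container declares no memory limit, `limits.memory` resolves to the node's allocatable memory. A one-line Python sketch of that fallback (illustrative only; byte values are made up):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Downward API fallback: report the container's memory limit if set,
    otherwise default to the node's allocatable memory."""
    return container_limit if container_limit is not None else node_allocatable

NODE_ALLOCATABLE = 8 * 1024**3  # hypothetical 8 GiB node
assert effective_memory_limit(None, NODE_ALLOCATABLE) == NODE_ALLOCATABLE
assert effective_memory_limit(512 * 1024**2, NODE_ALLOCATABLE) == 512 * 1024**2
```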
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:21:54.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-e4bf1858-fc12-45e8-a6e4-e5ddc2dbfb37
STEP: Creating a pod to test consume configMaps
May 11 22:21:54.745: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f09a1ad0-1886-4504-a556-7f256a3b17d8" in namespace "projected-7747" to be "Succeeded or Failed"
May 11 22:21:54.749: INFO: Pod "pod-projected-configmaps-f09a1ad0-1886-4504-a556-7f256a3b17d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386876ms
May 11 22:21:56.779: INFO: Pod "pod-projected-configmaps-f09a1ad0-1886-4504-a556-7f256a3b17d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033663176s
May 11 22:21:58.898: INFO: Pod "pod-projected-configmaps-f09a1ad0-1886-4504-a556-7f256a3b17d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152998103s
STEP: Saw pod success
May 11 22:21:58.898: INFO: Pod "pod-projected-configmaps-f09a1ad0-1886-4504-a556-7f256a3b17d8" satisfied condition "Succeeded or Failed"
May 11 22:21:58.900: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-f09a1ad0-1886-4504-a556-7f256a3b17d8 container projected-configmap-volume-test: 
STEP: delete the pod
May 11 22:21:58.942: INFO: Waiting for pod pod-projected-configmaps-f09a1ad0-1886-4504-a556-7f256a3b17d8 to disappear
May 11 22:21:58.958: INFO: Pod pod-projected-configmaps-f09a1ad0-1886-4504-a556-7f256a3b17d8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:21:58.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7747" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4668,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:21:58.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-60345441-78cb-48a8-afd4-5b775c956b69 in namespace container-probe-5262
May 11 22:22:07.472: INFO: Started pod test-webserver-60345441-78cb-48a8-afd4-5b775c956b69 in namespace container-probe-5262
STEP: checking the pod's current state and verifying that restartCount is present
May 11 22:22:07.474: INFO: Initial restart count of pod test-webserver-60345441-78cb-48a8-afd4-5b775c956b69 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:26:08.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5262" for this suite.

• [SLOW TEST:249.973 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4687,"failed":0}
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:26:08.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
May 11 22:26:10.532: INFO: Waiting up to 5m0s for pod "var-expansion-518f94b3-b2f5-4747-92d0-ee6eaab8dbcc" in namespace "var-expansion-5017" to be "Succeeded or Failed"
May 11 22:26:10.740: INFO: Pod "var-expansion-518f94b3-b2f5-4747-92d0-ee6eaab8dbcc": Phase="Pending", Reason="", readiness=false. Elapsed: 207.464794ms
May 11 22:26:12.901: INFO: Pod "var-expansion-518f94b3-b2f5-4747-92d0-ee6eaab8dbcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368524177s
May 11 22:26:14.915: INFO: Pod "var-expansion-518f94b3-b2f5-4747-92d0-ee6eaab8dbcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.382646687s
STEP: Saw pod success
May 11 22:26:14.915: INFO: Pod "var-expansion-518f94b3-b2f5-4747-92d0-ee6eaab8dbcc" satisfied condition "Succeeded or Failed"
May 11 22:26:14.917: INFO: Trying to get logs from node kali-worker pod var-expansion-518f94b3-b2f5-4747-92d0-ee6eaab8dbcc container dapi-container: 
STEP: delete the pod
May 11 22:26:15.447: INFO: Waiting for pod var-expansion-518f94b3-b2f5-4747-92d0-ee6eaab8dbcc to disappear
May 11 22:26:15.492: INFO: Pod var-expansion-518f94b3-b2f5-4747-92d0-ee6eaab8dbcc no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:26:15.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5017" for this suite.

• [SLOW TEST:6.563 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4693,"failed":0}
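The substitution being verified is Kubernetes' `$(VAR)` expansion in container command args: defined references are replaced with the variable's value, and references to undefined variables are left verbatim. A minimal Python model of that expansion (a sketch of the documented behavior, not the kubelet's implementation):

```python
import re

def expand(arg, env):
    """Model of $(VAR) expansion in container args: replace $(NAME) with
    env[NAME] when defined, and leave unresolved references as-is."""
    return re.sub(
        r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
        lambda m: env.get(m.group(1), m.group(0)),
        arg,
    )

assert expand("echo $(MESSAGE)", {"MESSAGE": "hello"}) == "echo hello"
assert expand("echo $(MISSING)", {}) == "echo $(MISSING)"  # left verbatim
```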
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:26:15.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-7e2c3cc8-5373-4944-87f8-92c3c1a6427e
STEP: Creating a pod to test consume configMaps
May 11 22:26:15.641: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-09b645d2-02a5-46b2-ae23-6b94ec2d7beb" in namespace "projected-9470" to be "Succeeded or Failed"
May 11 22:26:15.828: INFO: Pod "pod-projected-configmaps-09b645d2-02a5-46b2-ae23-6b94ec2d7beb": Phase="Pending", Reason="", readiness=false. Elapsed: 187.210902ms
May 11 22:26:17.832: INFO: Pod "pod-projected-configmaps-09b645d2-02a5-46b2-ae23-6b94ec2d7beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190620418s
May 11 22:26:19.859: INFO: Pod "pod-projected-configmaps-09b645d2-02a5-46b2-ae23-6b94ec2d7beb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217342478s
May 11 22:26:21.862: INFO: Pod "pod-projected-configmaps-09b645d2-02a5-46b2-ae23-6b94ec2d7beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.221195105s
STEP: Saw pod success
May 11 22:26:21.863: INFO: Pod "pod-projected-configmaps-09b645d2-02a5-46b2-ae23-6b94ec2d7beb" satisfied condition "Succeeded or Failed"
May 11 22:26:21.865: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-09b645d2-02a5-46b2-ae23-6b94ec2d7beb container projected-configmap-volume-test: 
STEP: delete the pod
May 11 22:26:21.933: INFO: Waiting for pod pod-projected-configmaps-09b645d2-02a5-46b2-ae23-6b94ec2d7beb to disappear
May 11 22:26:21.978: INFO: Pod pod-projected-configmaps-09b645d2-02a5-46b2-ae23-6b94ec2d7beb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:26:21.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9470" for this suite.

• [SLOW TEST:6.487 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4704,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 11 22:26:21.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 11 22:26:22.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1af8778-b43e-4696-9775-4b2d6b9a6b1d" in namespace "downward-api-803" to be "Succeeded or Failed"
May 11 22:26:22.595: INFO: Pod "downwardapi-volume-d1af8778-b43e-4696-9775-4b2d6b9a6b1d": Phase="Pending", Reason="", readiness=false. Elapsed: 177.613008ms
May 11 22:26:24.954: INFO: Pod "downwardapi-volume-d1af8778-b43e-4696-9775-4b2d6b9a6b1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.536630024s
May 11 22:26:26.958: INFO: Pod "downwardapi-volume-d1af8778-b43e-4696-9775-4b2d6b9a6b1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.540459691s
STEP: Saw pod success
May 11 22:26:26.958: INFO: Pod "downwardapi-volume-d1af8778-b43e-4696-9775-4b2d6b9a6b1d" satisfied condition "Succeeded or Failed"
May 11 22:26:26.962: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d1af8778-b43e-4696-9775-4b2d6b9a6b1d container client-container: 
STEP: delete the pod
May 11 22:26:27.332: INFO: Waiting for pod downwardapi-volume-d1af8778-b43e-4696-9775-4b2d6b9a6b1d to disappear
May 11 22:26:27.409: INFO: Pod downwardapi-volume-d1af8778-b43e-4696-9775-4b2d6b9a6b1d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 11 22:26:27.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-803" for this suite.

• [SLOW TEST:5.427 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4707,"failed":0}
SSSSSSSSSS
May 11 22:26:27.414: INFO: Running AfterSuite actions on all nodes
May 11 22:26:27.414: INFO: Running AfterSuite actions on node 1
May 11 22:26:27.414: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 6209.129 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS