I0512 12:28:44.331654 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0512 12:28:44.331839 7 e2e.go:124] Starting e2e run "252bd1e4-b42f-4955-b898-e36082558cf5" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589286523 - Will randomize all specs
Will run 275 of 4992 specs

May 12 12:28:44.388: INFO: >>> kubeConfig: /root/.kube/config
May 12 12:28:44.390: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 12:28:44.415: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 12:28:44.447: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 12:28:44.447: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 12 12:28:44.447: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 12:28:44.455: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 12 12:28:44.455: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 12:28:44.455: INFO: e2e test version: v1.18.2
May 12 12:28:44.456: INFO: kube-apiserver version: v1.18.2
May 12 12:28:44.456: INFO: >>> kubeConfig: /root/.kube/config
May 12 12:28:44.462: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:28:44.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
May 12 12:28:44.516: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 12:28:44.570: INFO: Create a RollingUpdate DaemonSet
May 12 12:28:44.573: INFO: Check that daemon pods launch on every node of the cluster
May 12 12:28:44.583: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:28:44.599: INFO: Number of nodes with available pods: 0
May 12 12:28:44.599: INFO: Node kali-worker is running more than one daemon pod
May 12 12:28:45.604: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:28:45.609: INFO: Number of nodes with available pods: 0
May 12 12:28:45.609: INFO: Node kali-worker is running more than one daemon pod
May 12 12:28:46.603: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:28:46.606: INFO: Number of nodes with available pods: 0
May 12 12:28:46.606: INFO: Node kali-worker is running more than one daemon pod
May 12 12:28:47.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:28:47.834: INFO: Number of nodes with available pods: 0
May 12 12:28:47.834: INFO: Node kali-worker is running more than one daemon pod
May 12 12:28:48.612: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:28:48.616: INFO: Number of nodes with available pods: 1
May 12 12:28:48.616: INFO: Node kali-worker is running more than one daemon pod
May 12 12:28:49.614: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:28:49.618: INFO: Number of nodes with available pods: 2
May 12 12:28:49.618: INFO: Number of running nodes: 2, number of available pods: 2
May 12 12:28:49.618: INFO: Update the DaemonSet to trigger a rollout
May 12 12:28:49.626: INFO: Updating DaemonSet daemon-set
May 12 12:29:03.736: INFO: Roll back the DaemonSet before rollout is complete
May 12 12:29:03.811: INFO: Updating DaemonSet daemon-set
May 12 12:29:03.811: INFO: Make sure DaemonSet rollback is complete
May 12 12:29:03.849: INFO: Wrong image for pod: daemon-set-jks9b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 12 12:29:03.849: INFO: Pod daemon-set-jks9b is not available
May 12 12:29:03.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:29:04.862: INFO: Wrong image for pod: daemon-set-jks9b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 12 12:29:04.862: INFO: Pod daemon-set-jks9b is not available
May 12 12:29:04.866: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:29:05.864: INFO: Wrong image for pod: daemon-set-jks9b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 12 12:29:05.864: INFO: Pod daemon-set-jks9b is not available
May 12 12:29:05.867: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:29:07.216: INFO: Wrong image for pod: daemon-set-jks9b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 12 12:29:07.216: INFO: Pod daemon-set-jks9b is not available
May 12 12:29:07.236: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:29:07.861: INFO: Wrong image for pod: daemon-set-jks9b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 12 12:29:07.861: INFO: Pod daemon-set-jks9b is not available
May 12 12:29:07.864: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 12:29:08.914: INFO: Pod daemon-set-98zgk is not available
May 12 12:29:08.949: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4438, will wait for the garbage collector to delete the pods
May 12 12:29:09.347: INFO: Deleting DaemonSet.extensions daemon-set took: 117.872184ms
May 12 12:29:09.647: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.30656ms
May 12 12:29:23.850: INFO: Number of nodes with available pods: 0
May 12 12:29:23.851: INFO: Number of running nodes: 0, number of available pods: 0
May 12 12:29:23.855: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4438/daemonsets","resourceVersion":"3717518"},"items":null}
May 12 12:29:23.859: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4438/pods","resourceVersion":"3717518"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:29:23.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4438" for this suite.
• [SLOW TEST:39.409 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":1,"skipped":7,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:29:23.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 12:29:25.127: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 12:29:27.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883365, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883365, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883365, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883365, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 12:29:29.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883365, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883365, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883365, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883365, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 12:29:32.170: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:29:32.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7834" for this suite.
STEP: Destroying namespace "webhook-7834-markers" for this suite.
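The mutating pod webhook registered above via the AdmissionRegistration API corresponds roughly to a configuration object like the following. This is a minimal sketch, not the test's actual object: the webhook name, path, and caBundle value are illustrative placeholders, while the service name and namespace are taken from the log.

```yaml
# Sketch of a mutating webhook registration similar to what the e2e
# framework creates. Names marked "hypothetical" are not from the log.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-pod-webhook   # hypothetical name
webhooks:
  - name: add-defaults.example.com      # hypothetical webhook name
    clientConfig:
      service:
        name: e2e-test-webhook          # service deployed by the test
        namespace: webhook-7834         # test namespace from the log
        path: /mutating-pods            # hypothetical serving path
      caBundle: "<base64-encoded CA certificate>"   # placeholder
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```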
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.567 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":2,"skipped":9,"failed":0}
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:29:32.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 12 12:29:41.069: INFO: Successfully updated pod "adopt-release-96rb6"
STEP: Checking that the Job readopts the Pod
May 12 12:29:41.069: INFO: Waiting up to 15m0s for pod "adopt-release-96rb6" in namespace "job-6864" to be "adopted"
May 12 12:29:41.081: INFO: Pod "adopt-release-96rb6": Phase="Running", Reason="", readiness=true. Elapsed: 11.532483ms
May 12 12:29:43.094: INFO: Pod "adopt-release-96rb6": Phase="Running", Reason="", readiness=true. Elapsed: 2.024296283s
May 12 12:29:43.094: INFO: Pod "adopt-release-96rb6" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
May 12 12:29:43.606: INFO: Successfully updated pod "adopt-release-96rb6"
STEP: Checking that the Job releases the Pod
May 12 12:29:43.606: INFO: Waiting up to 15m0s for pod "adopt-release-96rb6" in namespace "job-6864" to be "released"
May 12 12:29:43.658: INFO: Pod "adopt-release-96rb6": Phase="Running", Reason="", readiness=true. Elapsed: 51.892816ms
May 12 12:29:46.469: INFO: Pod "adopt-release-96rb6": Phase="Running", Reason="", readiness=true. Elapsed: 2.863344463s
May 12 12:29:46.469: INFO: Pod "adopt-release-96rb6" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:29:46.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6864" for this suite.
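The adopt/release cycle above hinges on the Job controller matching pods by label: a pod whose controller reference is removed is re-adopted while its labels still match the Job's selector, and stripping those labels makes the controller release it. A Job of roughly the shape exercised here might look like the following sketch (the image, command, and container name are illustrative, not what the e2e test actually runs):

```yaml
# Minimal sketch of a Job like the "adopt-release" one in the log.
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release            # name matches the pod prefix seen in the log
spec:
  parallelism: 2                 # the test ensures active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker                           # illustrative name
          image: docker.io/library/busybox:1.29  # illustrative image
          command: ["sleep", "3600"]             # keep the pod Running
```

The controller applies its selector labels (such as `job-name` and `controller-uid`) to the pod template automatically; the test "releases" a pod by removing exactly those labels.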
• [SLOW TEST:14.267 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":3,"skipped":9,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:29:46.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 12:29:48.372: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 12:29:50.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883388, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883388, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883388, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883388, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 12:29:52.453: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883388, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883388, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883388, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883388, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 12:29:55.411: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:29:55.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8923" for this suite.
STEP: Destroying namespace "webhook-8923-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.188 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":4,"skipped":23,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:29:55.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 12 12:30:08.292: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 12 12:30:08.328: INFO: Pod pod-with-prestop-http-hook still exists
May 12 12:30:10.328: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 12 12:30:10.333: INFO: Pod pod-with-prestop-http-hook still exists
May 12 12:30:12.328: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 12 12:30:12.333: INFO: Pod pod-with-prestop-http-hook still exists
May 12 12:30:14.328: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 12 12:30:14.331: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:30:14.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6815" for this suite.
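The pod under test carries a preStop httpGet hook, which the kubelet fires against the separately created handler pod before stopping the container; that is why deletion takes several polling rounds above. A sketch of such a pod follows (the image, port, path, and handler address are illustrative assumptions, not values from the log):

```yaml
# Sketch of a pod with a preStop httpGet lifecycle hook, as exercised
# by the test. Values marked "hypothetical" are not from the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # pod name from the log
spec:
  containers:
    - name: pod-with-prestop-http-hook
      image: k8s.gcr.io/pause:3.2          # illustrative image
      lifecycle:
        preStop:
          httpGet:
            path: /echo?msg=prestop        # hypothetical handler endpoint
            port: 8080                     # hypothetical handler port
            host: 10.244.1.10              # hypothetical IP of the handler pod
```

The "check prestop hook" step then asks the handler container whether it received the GET before the pod disappeared.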
• [SLOW TEST:18.463 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":27,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:30:14.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-86507a1a-60f1-4aeb-b6da-ad4bdb9cdad5
STEP: Creating a pod to test consume configMaps
May 12 12:30:14.617: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2029d001-7838-4880-9842-12ddd8ea4467" in namespace "projected-2107" to be "Succeeded or Failed"
May 12 12:30:14.650: INFO: Pod "pod-projected-configmaps-2029d001-7838-4880-9842-12ddd8ea4467": Phase="Pending", Reason="", readiness=false. Elapsed: 32.806055ms
May 12 12:30:16.734: INFO: Pod "pod-projected-configmaps-2029d001-7838-4880-9842-12ddd8ea4467": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11693008s
May 12 12:30:18.737: INFO: Pod "pod-projected-configmaps-2029d001-7838-4880-9842-12ddd8ea4467": Phase="Running", Reason="", readiness=true. Elapsed: 4.119817401s
May 12 12:30:20.740: INFO: Pod "pod-projected-configmaps-2029d001-7838-4880-9842-12ddd8ea4467": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122585647s
STEP: Saw pod success
May 12 12:30:20.740: INFO: Pod "pod-projected-configmaps-2029d001-7838-4880-9842-12ddd8ea4467" satisfied condition "Succeeded or Failed"
May 12 12:30:20.742: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-2029d001-7838-4880-9842-12ddd8ea4467 container projected-configmap-volume-test:
STEP: delete the pod
May 12 12:30:20.898: INFO: Waiting for pod pod-projected-configmaps-2029d001-7838-4880-9842-12ddd8ea4467 to disappear
May 12 12:30:20.900: INFO: Pod pod-projected-configmaps-2029d001-7838-4880-9842-12ddd8ea4467 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:30:20.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2107" for this suite.
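"Consumable in multiple volumes in the same pod" means the same ConfigMap is projected at two different mount points of one pod, and the test container reads it back from both. A sketch of such a pod (the pod name, image, command, and mount paths are illustrative assumptions):

```yaml
# Sketch: one ConfigMap projected into two volumes of the same pod.
# All names here are illustrative, not the generated names in the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
    - name: projected-configmap-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
      volumeMounts:
        - name: cm-volume-1
          mountPath: /etc/cm-volume-1
        - name: cm-volume-2
          mountPath: /etc/cm-volume-2
  volumes:
    - name: cm-volume-1
      projected:
        sources:
          - configMap:
              name: projected-configmap-test-volume   # illustrative ConfigMap name
    - name: cm-volume-2
      projected:
        sources:
          - configMap:
              name: projected-configmap-test-volume   # same ConfigMap, second mount
```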
• [SLOW TEST:6.549 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":57,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:30:20.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-cdce2811-6f35-4508-8da2-5d56a98fe1e2
STEP: Creating a pod to test consume secrets
May 12 12:30:21.100: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031" in namespace "projected-8619" to be "Succeeded or Failed"
May 12 12:30:21.227: INFO: Pod "pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031": Phase="Pending", Reason="", readiness=false. Elapsed: 127.137761ms
May 12 12:30:23.526: INFO: Pod "pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42545381s
May 12 12:30:25.969: INFO: Pod "pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031": Phase="Pending", Reason="", readiness=false. Elapsed: 4.868431811s
May 12 12:30:28.046: INFO: Pod "pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031": Phase="Pending", Reason="", readiness=false. Elapsed: 6.945646729s
May 12 12:30:30.050: INFO: Pod "pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.950109595s
STEP: Saw pod success
May 12 12:30:30.050: INFO: Pod "pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031" satisfied condition "Succeeded or Failed"
May 12 12:30:30.054: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031 container projected-secret-volume-test:
STEP: delete the pod
May 12 12:30:30.232: INFO: Waiting for pod pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031 to disappear
May 12 12:30:30.526: INFO: Pod pod-projected-secrets-9fc1f5ad-3029-4e63-bb11-24576bff2031 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:30:30.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8619" for this suite.
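The defaultMode variant projects the Secret with an explicit file mode, which the test container then inspects before exiting. A sketch of the relevant pod spec (pod name, image, command, mount path, and Secret name are illustrative; only the use of `projected.defaultMode` is the point):

```yaml
# Sketch: Secret projected with a fixed defaultMode. Names are
# illustrative placeholders, not the generated names in the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
    - name: projected-secret-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
      volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
          readOnly: true
  volumes:
    - name: projected-secret-volume
      projected:
        defaultMode: 0400        # mode under test, e.g. r-------- on each file
        sources:
          - secret:
              name: projected-secret-test   # illustrative Secret name
```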
• [SLOW TEST:9.738 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":68,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:30:30.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 12:30:31.239: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b244492a-2325-4801-93f6-692abf3b3fdd" in namespace "security-context-test-9105" to be "Succeeded or Failed"
May 12 12:30:31.298: INFO: Pod "busybox-readonly-false-b244492a-2325-4801-93f6-692abf3b3fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 59.531634ms
May 12 12:30:33.867: INFO: Pod "busybox-readonly-false-b244492a-2325-4801-93f6-692abf3b3fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627979395s
May 12 12:30:35.878: INFO: Pod "busybox-readonly-false-b244492a-2325-4801-93f6-692abf3b3fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.639692977s
May 12 12:30:37.882: INFO: Pod "busybox-readonly-false-b244492a-2325-4801-93f6-692abf3b3fdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.642924004s
May 12 12:30:37.882: INFO: Pod "busybox-readonly-false-b244492a-2325-4801-93f6-692abf3b3fdd" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:30:37.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9105" for this suite.
• [SLOW TEST:7.345 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":76,"failed":0}
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected
configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:30:37.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-0f8a7657-b54b-4c89-8ea0-36cceb187e1f STEP: Creating a pod to test consume configMaps May 12 12:30:38.398: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8" in namespace "projected-7355" to be "Succeeded or Failed" May 12 12:30:38.404: INFO: Pod "pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391222ms May 12 12:30:40.408: INFO: Pod "pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010219033s May 12 12:30:42.573: INFO: Pod "pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175268391s May 12 12:30:44.603: INFO: Pod "pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8": Phase="Running", Reason="", readiness=true. Elapsed: 6.204683644s May 12 12:30:46.606: INFO: Pod "pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.208211673s STEP: Saw pod success May 12 12:30:46.606: INFO: Pod "pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8" satisfied condition "Succeeded or Failed" May 12 12:30:46.609: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8 container projected-configmap-volume-test: STEP: delete the pod May 12 12:30:46.628: INFO: Waiting for pod pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8 to disappear May 12 12:30:46.655: INFO: Pod pod-projected-configmaps-e96d2aa0-ae22-44b8-a976-052c7b67f3b8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:30:46.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7355" for this suite. • [SLOW TEST:8.672 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":76,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:30:46.664: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:30:50.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5745" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":105,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:30:50.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:30:51.527: INFO: 
deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:30:53.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883451, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883451, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883451, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883451, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:30:55.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883451, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883451, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883451, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883451, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:30:58.591: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:30:59.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1696" for this suite. STEP: Destroying namespace "webhook-1696-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.453 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":11,"skipped":110,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:30:59.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:31:00.560: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:31:02.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883460, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883460, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883460, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883460, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:31:05.676: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to 
deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:31:15.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2860" for this suite. STEP: Destroying namespace "webhook-2860-markers" for this suite. 
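The webhook test above registers an admission webhook, then verifies that pod and configmap creation is denied, that updates (PUT and PATCH) to a non-compliant configmap are rejected, and that a bypassed ("whitelisted") namespace is exempt. A toy decision function capturing that control flow — the label key, marker value, and exemption list here are invented for illustration, not the suite's actual rules:

```python
def admit(kind, namespace, labels, exempt_namespaces=("exempt-ns",)):
    """Return True if the request is admitted, False if denied.

    Toy model of the steps above: requests from an exempted namespace bypass
    the webhook entirely; otherwise pods and configmaps carrying the deny
    marker are rejected. All rule details here are illustrative.
    """
    if namespace in exempt_namespaces:
        return True  # namespace bypasses the webhook
    if kind in ("Pod", "ConfigMap") and labels.get("webhook-e2e-test") == "should-be-denied":
        return False  # denied by the webhook
    return True

decisions = [
    admit("Pod", "webhook-2860", {"webhook-e2e-test": "should-be-denied"}),       # denied
    admit("ConfigMap", "webhook-2860", {"webhook-e2e-test": "should-be-denied"}), # denied
    admit("ConfigMap", "webhook-2860", {}),                                       # admitted
    admit("ConfigMap", "exempt-ns", {"webhook-e2e-test": "should-be-denied"}),    # bypasses
]
```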
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.672 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":12,"skipped":114,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:31:15.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0512 12:31:30.150163 7 metrics_grabber.go:84] Master node is not 
registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 12:31:30.150: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:31:30.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-982" for this suite. 
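The garbage collector test above gives half the pods a second ownerReference to simpletest-rc-to-stay, deletes simpletest-rc-to-be-deleted, and then verifies those dual-owned pods survive. The rule under test is that an object is collected only once all of its owners are gone. A toy model of that rule (a sketch, not the real controller logic):

```python
def collect_garbage(objects, live_owners):
    """Return the objects that survive collection: anything with at least one
    live owner is kept; objects whose owners are all gone are collected."""
    return {
        name: owner_refs
        for name, owner_refs in objects.items()
        if any(owner in live_owners for owner in owner_refs)
    }

pods = {
    "pod-only-deleted-owner": ["simpletest-rc-to-be-deleted"],
    "pod-both-owners": ["simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"],
}
# simpletest-rc-to-be-deleted has been deleted; only rc-to-stay remains live.
survivors = collect_garbage(pods, live_owners={"simpletest-rc-to-stay"})
```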
• [SLOW TEST:15.005 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":13,"skipped":126,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:31:30.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 12 12:31:32.427: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e50176f-19ba-43f7-80a5-f90d0b18d652" in namespace "projected-3634" to be "Succeeded or Failed" May 12 12:31:32.630: INFO: Pod "downwardapi-volume-5e50176f-19ba-43f7-80a5-f90d0b18d652": Phase="Pending", Reason="", readiness=false. 
Elapsed: 201.995173ms May 12 12:31:34.633: INFO: Pod "downwardapi-volume-5e50176f-19ba-43f7-80a5-f90d0b18d652": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205237808s May 12 12:31:36.663: INFO: Pod "downwardapi-volume-5e50176f-19ba-43f7-80a5-f90d0b18d652": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235393291s STEP: Saw pod success May 12 12:31:36.663: INFO: Pod "downwardapi-volume-5e50176f-19ba-43f7-80a5-f90d0b18d652" satisfied condition "Succeeded or Failed" May 12 12:31:36.689: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-5e50176f-19ba-43f7-80a5-f90d0b18d652 container client-container: STEP: delete the pod May 12 12:31:37.026: INFO: Waiting for pod downwardapi-volume-5e50176f-19ba-43f7-80a5-f90d0b18d652 to disappear May 12 12:31:37.055: INFO: Pod downwardapi-volume-5e50176f-19ba-43f7-80a5-f90d0b18d652 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:31:37.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3634" for this suite. 
• [SLOW TEST:6.176 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":141,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:31:37.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args May 12 12:31:37.277: INFO: Waiting up to 5m0s for pod "var-expansion-a779f1d9-cd4f-40f0-b8a7-39866c608cb5" in namespace "var-expansion-8690" to be "Succeeded or Failed" May 12 12:31:37.310: INFO: Pod "var-expansion-a779f1d9-cd4f-40f0-b8a7-39866c608cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.559253ms May 12 12:31:39.592: INFO: Pod "var-expansion-a779f1d9-cd4f-40f0-b8a7-39866c608cb5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.315110686s May 12 12:31:41.675: INFO: Pod "var-expansion-a779f1d9-cd4f-40f0-b8a7-39866c608cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397612357s May 12 12:31:43.707: INFO: Pod "var-expansion-a779f1d9-cd4f-40f0-b8a7-39866c608cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.429594482s STEP: Saw pod success May 12 12:31:43.707: INFO: Pod "var-expansion-a779f1d9-cd4f-40f0-b8a7-39866c608cb5" satisfied condition "Succeeded or Failed" May 12 12:31:43.709: INFO: Trying to get logs from node kali-worker pod var-expansion-a779f1d9-cd4f-40f0-b8a7-39866c608cb5 container dapi-container: STEP: delete the pod May 12 12:31:43.762: INFO: Waiting for pod var-expansion-a779f1d9-cd4f-40f0-b8a7-39866c608cb5 to disappear May 12 12:31:44.184: INFO: Pod var-expansion-a779f1d9-cd4f-40f0-b8a7-39866c608cb5 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:31:44.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8690" for this suite. • [SLOW TEST:7.098 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:31:44.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:31:55.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6384" for this suite. • [SLOW TEST:11.804 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":16,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:31:55.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:31:56.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8436" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":252,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:31:56.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments May 12 12:31:56.613: INFO: Waiting up to 5m0s for pod "client-containers-79754b8a-ff6e-4254-88ca-377baffe5795" in namespace "containers-6720" to be "Succeeded or Failed" May 12 12:31:56.634: INFO: Pod "client-containers-79754b8a-ff6e-4254-88ca-377baffe5795": Phase="Pending", Reason="", readiness=false. Elapsed: 20.215546ms May 12 12:31:58.912: INFO: Pod "client-containers-79754b8a-ff6e-4254-88ca-377baffe5795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298274003s May 12 12:32:00.963: INFO: Pod "client-containers-79754b8a-ff6e-4254-88ca-377baffe5795": Phase="Running", Reason="", readiness=true. Elapsed: 4.349625446s May 12 12:32:02.968: INFO: Pod "client-containers-79754b8a-ff6e-4254-88ca-377baffe5795": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.354307374s STEP: Saw pod success May 12 12:32:02.968: INFO: Pod "client-containers-79754b8a-ff6e-4254-88ca-377baffe5795" satisfied condition "Succeeded or Failed" May 12 12:32:02.971: INFO: Trying to get logs from node kali-worker pod client-containers-79754b8a-ff6e-4254-88ca-377baffe5795 container test-container: STEP: delete the pod May 12 12:32:03.009: INFO: Waiting for pod client-containers-79754b8a-ff6e-4254-88ca-377baffe5795 to disappear May 12 12:32:03.019: INFO: Pod client-containers-79754b8a-ff6e-4254-88ca-377baffe5795 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:32:03.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6720" for this suite. • [SLOW TEST:6.598 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:32:03.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir 
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
May 12 12:32:03.273: INFO: Waiting up to 5m0s for pod "pod-dac3c255-306b-4091-815a-9fcefdcd1b46" in namespace "emptydir-960" to be "Succeeded or Failed"
May 12 12:32:03.285: INFO: Pod "pod-dac3c255-306b-4091-815a-9fcefdcd1b46": Phase="Pending", Reason="", readiness=false. Elapsed: 11.206487ms
May 12 12:32:05.288: INFO: Pod "pod-dac3c255-306b-4091-815a-9fcefdcd1b46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014832191s
May 12 12:32:07.292: INFO: Pod "pod-dac3c255-306b-4091-815a-9fcefdcd1b46": Phase="Running", Reason="", readiness=true. Elapsed: 4.019167711s
May 12 12:32:09.297: INFO: Pod "pod-dac3c255-306b-4091-815a-9fcefdcd1b46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0236965s
STEP: Saw pod success
May 12 12:32:09.297: INFO: Pod "pod-dac3c255-306b-4091-815a-9fcefdcd1b46" satisfied condition "Succeeded or Failed"
May 12 12:32:09.300: INFO: Trying to get logs from node kali-worker2 pod pod-dac3c255-306b-4091-815a-9fcefdcd1b46 container test-container:
STEP: delete the pod
May 12 12:32:09.329: INFO: Waiting for pod pod-dac3c255-306b-4091-815a-9fcefdcd1b46 to disappear
May 12 12:32:09.343: INFO: Pod pod-dac3c255-306b-4091-815a-9fcefdcd1b46 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:32:09.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-960" for this suite.
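The "(root,0666,default)" case above mounts an emptyDir volume with mode 0666 on the default medium and has the test container report the mounted file's permission string. How an octal mode maps to the `ls -l`-style string can be sketched as follows (illustrative only, assuming standard POSIX permission semantics; the actual check lives in the test image, not here):

```python
import stat

# (permission bit, character) pairs in ls -l order: user, group, other
_BITS = [
    (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),
    (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),
    (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),
]

def mode_string(mode: int) -> str:
    """Render the permission bits of a regular file the way `ls -l` does."""
    return "-" + "".join(ch if mode & bit else "-" for bit, ch in _BITS)

# 0666 -> "-rw-rw-rw-", the permission string a 0666 defaultMode would produce
```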
• [SLOW TEST:6.297 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":306,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:32:09.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:32:09.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2491" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":20,"skipped":319,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:32:09.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4403.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4403.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 12:32:17.670: INFO: DNS probes using dns-4403/dns-test-c2937428-bb8c-45fa-9263-574e0a8ddce4 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:32:17.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4403" for this suite.
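The wheezy and jessie probe loops above derive each pod's DNS A-record name from its IP: `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4403.pod.cluster.local"}'` dash-joins the four octets and appends the namespace's pod zone. The same transformation in Python (the example IP below is illustrative, not taken from the log):

```python
def pod_a_record(pod_ip: str, namespace: str, zone: str = "pod.cluster.local") -> str:
    """Build the pod A-record name used by the DNS probes: dots in the pod IP
    become dashes, then the namespace and the pod zone are appended."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.{zone}"

# A pod at 10.244.1.5 in namespace dns-4403 would be probed as
# 10-244-1-5.dns-4403.pod.cluster.local
```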
• [SLOW TEST:9.051 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":21,"skipped":335,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:32:18.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May 12 12:32:19.587: INFO: Waiting up to 5m0s for pod "pod-ffba9ef8-0bdc-49b2-951c-90e420f0f02d" in namespace "emptydir-5874" to be "Succeeded or Failed"
May 12 12:32:19.625: INFO: Pod "pod-ffba9ef8-0bdc-49b2-951c-90e420f0f02d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.327102ms
May 12 12:32:21.777: INFO: Pod "pod-ffba9ef8-0bdc-49b2-951c-90e420f0f02d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190609659s
May 12 12:32:23.915: INFO: Pod "pod-ffba9ef8-0bdc-49b2-951c-90e420f0f02d": Phase="Running", Reason="", readiness=true. Elapsed: 4.328480803s
May 12 12:32:25.920: INFO: Pod "pod-ffba9ef8-0bdc-49b2-951c-90e420f0f02d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.333364147s
STEP: Saw pod success
May 12 12:32:25.920: INFO: Pod "pod-ffba9ef8-0bdc-49b2-951c-90e420f0f02d" satisfied condition "Succeeded or Failed"
May 12 12:32:25.923: INFO: Trying to get logs from node kali-worker pod pod-ffba9ef8-0bdc-49b2-951c-90e420f0f02d container test-container:
STEP: delete the pod
May 12 12:32:25.992: INFO: Waiting for pod pod-ffba9ef8-0bdc-49b2-951c-90e420f0f02d to disappear
May 12 12:32:26.028: INFO: Pod pod-ffba9ef8-0bdc-49b2-951c-90e420f0f02d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:32:26.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5874" for this suite.
• [SLOW TEST:7.434 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":350,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:32:26.035: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-e99eaba7-5fdb-4122-a7c4-3963d31e91c4
STEP: Creating a pod to test consume secrets
May 12 12:32:26.215: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f94e83bc-9f14-4e1e-b13a-51ee8ac62d69" in namespace "projected-2720" to be "Succeeded or Failed"
May 12 12:32:26.239: INFO: Pod "pod-projected-secrets-f94e83bc-9f14-4e1e-b13a-51ee8ac62d69": Phase="Pending", Reason="", readiness=false. Elapsed: 23.781306ms
May 12 12:32:28.322: INFO: Pod "pod-projected-secrets-f94e83bc-9f14-4e1e-b13a-51ee8ac62d69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106579043s
May 12 12:32:30.327: INFO: Pod "pod-projected-secrets-f94e83bc-9f14-4e1e-b13a-51ee8ac62d69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111282765s
STEP: Saw pod success
May 12 12:32:30.327: INFO: Pod "pod-projected-secrets-f94e83bc-9f14-4e1e-b13a-51ee8ac62d69" satisfied condition "Succeeded or Failed"
May 12 12:32:30.330: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-f94e83bc-9f14-4e1e-b13a-51ee8ac62d69 container projected-secret-volume-test:
STEP: delete the pod
May 12 12:32:30.350: INFO: Waiting for pod pod-projected-secrets-f94e83bc-9f14-4e1e-b13a-51ee8ac62d69 to disappear
May 12 12:32:30.386: INFO: Pod pod-projected-secrets-f94e83bc-9f14-4e1e-b13a-51ee8ac62d69 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:32:30.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2720" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":373,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:32:30.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:32:30.873: INFO: Waiting up to 3m0s for all
(but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8500" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":24,"skipped":393,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:32:30.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 12:32:30.950: INFO: Creating deployment "webserver-deployment"
May 12 12:32:30.960: INFO: Waiting for observed generation 1
May 12 12:32:33.030: INFO: Waiting for all required pods to come up
May 12 12:32:33.038: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 12 12:32:45.482: INFO: Waiting for deployment "webserver-deployment" to complete
May 12 12:32:45.489: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 12 12:32:45.496: INFO: Updating deployment webserver-deployment
May 12 12:32:45.496: INFO: Waiting for observed generation 2
May 12 12:32:47.658: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 12 12:32:48.107: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 12 12:32:48.110: INFO: Waiting for the first rollout's replicaset of deployment
"webserver-deployment" to have desired number of replicas
May 12 12:32:48.118: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 12 12:32:48.118: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 12 12:32:48.120: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 12 12:32:48.124: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 12 12:32:48.124: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 12 12:32:48.130: INFO: Updating deployment webserver-deployment
May 12 12:32:48.130: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 12 12:32:48.945: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 12 12:32:48.949: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 12 12:32:52.071: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8934 /apis/apps/v1/namespaces/deployment-8934/deployments/webserver-deployment 007e21e0-0d5e-47cb-a723-8bbe58303141 3719316 3 2020-05-12 12:32:30 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 FieldsV1{Raw: [managedFields JSON byte dump elided for readability],}} {kube-controller-manager Update apps/v1 2020-05-12 12:32:50 +0000 UTC FieldsV1 &FieldsV1{Raw: [managedFields JSON byte dump elided for readability],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f9fff8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-12 12:32:48 +0000 UTC,LastTransitionTime:2020-05-12 12:32:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-12 12:32:50 +0000 UTC,LastTransitionTime:2020-05-12 12:32:30 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
May 12 12:32:52.149: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-8934
/apis/apps/v1/namespaces/deployment-8934/replicasets/webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 3719300 3 2020-05-12 12:32:45 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 007e21e0-0d5e-47cb-a723-8bbe58303141 0xc003021897 0xc003021898}] [] [{kube-controller-manager Update apps/v1 2020-05-12 12:32:50 +0000 UTC FieldsV1 FieldsV1{Raw: [managedFields JSON byte dump elided for readability],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003021918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 12 12:32:52.149: INFO: All old ReplicaSets of Deployment "webserver-deployment":
May 12 12:32:52.149: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-8934 /apis/apps/v1/namespaces/deployment-8934/replicasets/webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 3719309 3 2020-05-12 12:32:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 007e21e0-0d5e-47cb-a723-8bbe58303141 0xc003021977 0xc003021978}] [] [{kube-controller-manager
Update apps/v1 2020-05-12 12:32:50 +0000 UTC FieldsV1 FieldsV1{Raw: [managedFields JSON byte dump elided for readability],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030219e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
May 12 12:32:53.448: INFO: Pod "webserver-deployment-6676bcd6d4-2sghx" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2sghx webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-2sghx 6eef98e8-2a5c-44ae-bf9d-49be80b8e4a8 3719204 0 2020-05-12 12:32:45 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e0460 0xc0032e0461}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45
116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 
100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.448: INFO: Pod "webserver-deployment-6676bcd6d4-4dcxj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4dcxj webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-4dcxj 5c9f6899-5d84-49ca-9de0-ac8ff4b3b555 3719326 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e0607 0xc0032e0608}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.449: INFO: Pod "webserver-deployment-6676bcd6d4-77lkt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-77lkt webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-77lkt 1a6223ee-678b-4ae2-9951-937bb5c49e8a 3719291 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e07b7 0xc0032e07b8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.449: INFO: Pod "webserver-deployment-6676bcd6d4-bn6n4" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bn6n4 webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-bn6n4 42391e85-1f17-49b1-83ee-221168ebb174 3719289 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e08f7 0xc0032e08f8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 
45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 
110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]Loc
alObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.449: INFO: Pod "webserver-deployment-6676bcd6d4-c4mq4" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-c4mq4 webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-c4mq4 dcc91b09-24ab-44a7-8cb6-c51c9c33c932 3719358 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e0a37 0xc0032e0a38}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 
125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.450: INFO: Pod "webserver-deployment-6676bcd6d4-km7dh" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-km7dh webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-km7dh d8d44ae0-a50f-479f-bdc9-3262bfca9f2f 3719345 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e0be7 0xc0032e0be8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.450: INFO: Pod "webserver-deployment-6676bcd6d4-lnhbt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lnhbt webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-lnhbt 440384da-0251-47d8-9a89-2bec4ecede73 3719317 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e0d97 0xc0032e0d98}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.451: INFO: Pod "webserver-deployment-6676bcd6d4-m6vps" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-m6vps webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-m6vps 1a368e62-e479-4dfa-9502-1d41ab8f0593 3719214 0 2020-05-12 12:32:45 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e0f47 0xc0032e0f48}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.451: INFO: Pod "webserver-deployment-6676bcd6d4-nfcbm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nfcbm webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-nfcbm 234357e9-ed4e-48db-9a52-b8e8fdfa406f 3719231 0 2020-05-12 12:32:45 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e10f7 0xc0032e10f8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-12 12:32:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.451: INFO: Pod "webserver-deployment-6676bcd6d4-pprxc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pprxc webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-pprxc bf7fa505-4c2b-4a3e-b01f-2ab664d8ef5d 3719296 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e12a7 0xc0032e12a8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.451: INFO: Pod "webserver-deployment-6676bcd6d4-t66sc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-t66sc webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-t66sc d6017abc-1f2a-47d3-a099-f5e4a0410a59 3719235 0 2020-05-12 12:32:45 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e13e7 0xc0032e13e8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 
45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 
110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.452: INFO: Pod "webserver-deployment-6676bcd6d4-vjjd7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vjjd7 webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-vjjd7 c12825e5-d8d1-4781-8783-11dee0029e76 3719351 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e1597 0xc0032e1598}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 54 100 101 57 99 54 51 45 51 55 50 101 45 52 53 97 55 45 98 48 101 53 45 101 56 99 100 99 98 57 97 56 50 98 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.452: INFO: Pod "webserver-deployment-6676bcd6d4-x2rnb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-x2rnb webserver-deployment-6676bcd6d4- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-6676bcd6d4-x2rnb c167c0b7-fab3-478e-b102-2ab5f49188c8 3719334 0 2020-05-12 12:32:45 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 26de9c63-372e-45a7-b0e5-e8cdcb9a82b0 0xc0032e1747 0xc0032e1748}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 … (managed-fields ownership JSON {"f:metadata":…,"f:spec":…}, serialized as raw byte values; elided for readability) … 125],}} {kubelet Update v1 2020-05-12 12:32:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 … (managed-fields status JSON {"f:status":…}, serialized as raw byte values; elided for readability) …
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.12,StartTime:2020-05-12 12:32:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: 
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.453: INFO: Pod "webserver-deployment-84855cf797-56bk8" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-56bk8 webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-56bk8 2e114a70-ec61-41ea-b5c9-ade612bd405e 3719136 0 2020-05-12 12:32:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc0032e1927 0xc0032e1928}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 … (managed-fields ownership JSON {"f:metadata":…,"f:spec":…}, serialized as raw byte values; elided for readability) … 125],}} {kubelet Update v1 2020-05-12 12:32:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 … (managed-fields status JSON {"f:status":…}, serialized as raw byte values; elided for readability) …
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.9,StartTime:2020-05-12 12:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:32:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cf6587a8d2d4a58810c894d1c64639e72628a7e6a9b5b152ffe416c12b9d37b7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.453: INFO: Pod "webserver-deployment-84855cf797-5nzhd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5nzhd webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-5nzhd c589c791-435c-443c-8062-475804b184f8 3719318 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc0032e1ad7 0xc0032e1ad8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 … (managed-fields ownership JSON {"f:metadata":…,"f:spec":…}, serialized as raw byte values; elided for readability) … 125],}} {kubelet Update v1 2020-05-12 12:32:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 … (managed-fields status JSON {"f:status":…}, serialized as raw byte values; elided for readability) …
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.454: INFO: Pod "webserver-deployment-84855cf797-6m26s" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6m26s webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-6m26s 6d3bd0d9-2984-4054-9782-976b9e1f174a 3719287 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc0032e1c67 0xc0032e1c68}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 … (managed-fields ownership JSON {"f:metadata":…,"f:spec":…}, serialized as raw byte values; elided for readability) …
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.454: INFO: Pod "webserver-deployment-84855cf797-6vwhv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6vwhv webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-6vwhv aa86a1e1-64ad-43c9-abf8-610f492cad5b 3719356 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc0032e1d97 0xc0032e1d98}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 
34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 57 51 57 99 97 48 45 56 57 52 52 45 52 102 97 49 45 98 102 55 54 45 53 99 51 57 99 50 49 54 56 49 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 
100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.454: INFO: Pod "webserver-deployment-84855cf797-97dx7" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-97dx7 webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-97dx7 62f42d92-7697-44f8-a0b0-575b3fe8f5e0 3719340 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc0032e1f27 0xc0032e1f28}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 57 51 57 99 97 48 45 56 57 52 52 45 52 102 97 49 45 98 102 55 54 45 53 99 51 57 99 50 49 54 56 49 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 
115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.455: INFO: Pod "webserver-deployment-84855cf797-cbj2n" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cbj2n webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-cbj2n 62e1002f-3f7b-4d72-a6d7-989a62cc1384 3719127 0 2020-05-12 12:32:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340a0b7 0xc00340a0b8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 57 51 57 99 97 48 45 56 57 52 52 45 52 102 97 49 45 98 102 55 54 45 53 99 51 57 99 50 49 54 56 49 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 
105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 
58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.7,StartTime:2020-05-12 12:32:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:32:41 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bb3cc940e236caf78e76d14c77dabb42facee8d7518cc9df8ca5111533e285cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.455: INFO: Pod "webserver-deployment-84855cf797-cpm9r" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cpm9r webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-cpm9r df21f74a-47db-475a-8294-b5a0f34ba552 3719302 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340a267 0xc00340a268}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 57 51 57 99 97 48 45 56 57 52 52 45 52 102 97 49 45 98 102 55 54 45 53 99 51 57 99 50 49 54 56 49 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 
123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.455: INFO: Pod "webserver-deployment-84855cf797-dmzrr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dmzrr webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-dmzrr 9b59e4b1-f6ff-4dcf-bf63-fc5b41cfe1ab 3719325 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340a3f7 0xc00340a3f8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08939ca0-8944-4fa1-bf76-5c39c216814f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-12 12:32:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.455: INFO: Pod "webserver-deployment-84855cf797-dn2bc" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dn2bc webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-dn2bc 77c46879-1765-4df0-8085-2846e13e539f 3719352 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340a587 0xc00340a588}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08939ca0-8944-4fa1-bf76-5c39c216814f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-12 12:32:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.456: INFO: Pod "webserver-deployment-84855cf797-fclbk" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fclbk webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-fclbk b48b56e5-cd6e-489c-9e78-4784afce989f 3719169 0 2020-05-12 12:32:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340a717 0xc00340a718}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:31 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08939ca0-8944-4fa1-bf76-5c39c216814f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-12 12:32:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.200\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.200,StartTime:2020-05-12 12:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:32:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://15b95352c8b09286a351945b098ff5ce99e5d7a69da2670da46745c91ddda85b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.456: INFO: Pod "webserver-deployment-84855cf797-fl8lq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fl8lq webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-fl8lq 30ebc46f-eead-47cf-a7df-fba682fda98d 3719165 0 2020-05-12 12:32:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340a8c7 0xc00340a8c8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:31 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08939ca0-8944-4fa1-bf76-5c39c216814f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-12 12:32:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.10,StartTime:2020-05-12 12:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:32:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0e3118f9442e1c2c6be54233d3df0d1f98aa9549b3414e776f8ce766f219fc16,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.456: INFO: Pod "webserver-deployment-84855cf797-gz8wl" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gz8wl webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-gz8wl d0b91952-f3b4-4ae6-a39d-0e4ca8b4deab 3719299 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340aa77 0xc00340aa78}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 57 51 57 99 97 48 45 56 57 52 52 45 52 102 97 49 45 98 102 55 54 45 53 99 51 57 99 50 49 54 56 49 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 
123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-12 12:32:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.456: INFO: Pod "webserver-deployment-84855cf797-hwxml" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hwxml webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-hwxml 9f23ea47-1ce3-4294-9202-5bef9ee86fee 3719292 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340ac07 0xc00340ac08}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 57 51 57 99 97 48 45 56 57 52 52 45 52 102 97 49 45 98 102 55 54 45 53 99 51 57 99 50 49 54 56 49 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.456: INFO: Pod "webserver-deployment-84855cf797-k9lpn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-k9lpn webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-k9lpn 0906b901-ff1f-46fd-beef-5d0493ce3eac 3719290 0 2020-05-12 12:32:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340ad37 0xc00340ad38}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 
34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 57 51 57 99 97 48 45 56 57 52 52 45 52 102 97 49 45 98 102 55 54 45 53 99 51 57 99 50 49 54 56 49 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 
100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGrou
pChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.457: INFO: Pod "webserver-deployment-84855cf797-kjvlx" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kjvlx webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-kjvlx 357f7844-b096-48d9-88e5-3dad3ef4dec2 3719171 0 2020-05-12 12:32:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340ae67 0xc00340ae68}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 
44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 57 51 57 99 97 48 45 56 57 52 52 45 52 102 97 49 45 98 102 55 54 45 53 99 51 57 99 50 49 54 56 49 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 
105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:32:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 
125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]
LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.11,StartTime:2020-05-12 12:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:32:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4f690a0b59662ceaffd1038a8d2acfab069d777510bb147e42beab13e4616d05,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.458: INFO: Pod "webserver-deployment-84855cf797-mnzsc" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mnzsc webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-mnzsc 7f9f5905-47c1-4673-b2cf-30d41b98dd77 3719333 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340b017 0xc00340b018}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 
FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08939ca0-8944-4fa1-bf76-5c39c216814f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-12 12:32:51 +0000 UTC FieldsV1 
&FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.459: INFO: Pod "webserver-deployment-84855cf797-pqgcd" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-pqgcd webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-pqgcd 71790921-31a8-4da5-9625-ab2f850b2af0 3719160 0 2020-05-12 12:32:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340b1a7 0xc00340b1a8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:31 +0000 UTC FieldsV1 
FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08939ca0-8944-4fa1-bf76-5c39c216814f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-12 12:32:44 +0000 UTC FieldsV1 
&FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.201\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.201,StartTime:2020-05-12 12:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:32:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fed438548400644d8d2cd6a0e4f2df19dcb8a176b2cb9df07248c1e8b4b6a1f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.459: INFO: Pod "webserver-deployment-84855cf797-qxj65" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qxj65 webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-qxj65 cd571adb-003b-4f53-a9ae-9567fc8222c1 3719149 0 2020-05-12 12:32:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340b357 0xc00340b358}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:31 +0000 UTC FieldsV1 
FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08939ca0-8944-4fa1-bf76-5c39c216814f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-12 12:32:43 +0000 UTC FieldsV1 
&FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.8\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.8,StartTime:2020-05-12 12:32:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:32:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://68daba25bcb245d1f82da9f768840299d4e6d3b4dccf32b61ef432096d5a0f70,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.465: INFO: Pod "webserver-deployment-84855cf797-t4dl5" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-t4dl5 webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-t4dl5 20cdfa4d-f6da-4ac2-9c56-865d0711d353 3719107 0 2020-05-12 12:32:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340b507 0xc00340b508}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:30 +0000 UTC FieldsV1 
FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08939ca0-8944-4fa1-bf76-5c39c216814f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-12 12:32:38 +0000 UTC FieldsV1 
&FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.197\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.197,StartTime:2020-05-12 12:32:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:32:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://952bd6deab82c7032fddd013d73996486aa914a2448ac48aa8016b244162d082,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:32:53.465: INFO: Pod "webserver-deployment-84855cf797-zt6fs" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zt6fs webserver-deployment-84855cf797- deployment-8934 /api/v1/namespaces/deployment-8934/pods/webserver-deployment-84855cf797-zt6fs 5b0b2a85-8d22-4a82-89c9-2fb1d4610ead 3719342 0 2020-05-12 12:32:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 08939ca0-8944-4fa1-bf76-5c39c216814f 0xc00340b6b7 0xc00340b6b8}] [] [{kube-controller-manager Update v1 2020-05-12 12:32:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08939ca0-8944-4fa1-bf76-5c39c216814f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-12 12:32:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9frgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9frgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9frgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:32:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:32:53.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8934" for this suite. • [SLOW TEST:24.685 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":25,"skipped":397,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:32:55.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:32:57.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7026" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":26,"skipped":405,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:32:57.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume 
plugin May 12 12:32:57.886: INFO: Waiting up to 5m0s for pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e" in namespace "downward-api-99" to be "Succeeded or Failed" May 12 12:32:58.071: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 185.033419ms May 12 12:33:00.119: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232870733s May 12 12:33:02.572: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.685515189s May 12 12:33:04.792: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.906328759s May 12 12:33:07.011: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.124349968s May 12 12:33:09.604: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.717862056s May 12 12:33:11.695: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.808550538s May 12 12:33:14.078: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.191769644s May 12 12:33:16.083: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.197041676s May 12 12:33:18.275: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.388364846s May 12 12:33:20.498: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.612284248s May 12 12:33:22.568: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Running", Reason="", readiness=true. Elapsed: 24.682000759s May 12 12:33:24.738: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Running", Reason="", readiness=true. Elapsed: 26.852217893s May 12 12:33:26.939: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.053276421s STEP: Saw pod success May 12 12:33:26.939: INFO: Pod "downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e" satisfied condition "Succeeded or Failed" May 12 12:33:27.096: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e container client-container: STEP: delete the pod May 12 12:33:27.750: INFO: Waiting for pod downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e to disappear May 12 12:33:28.042: INFO: Pod downwardapi-volume-054e8035-f11a-4a00-b3df-00d908f3682e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:33:28.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-99" for this suite. 
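The downward API volume test above reads the container's memory limit from a file inside the pod; because the pod sets no memory limit, the kubelet substitutes the node's allocatable memory. A minimal sketch of a pod wired this way (the name, image, and command are illustrative assumptions, not the exact manifest the e2e framework generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumption: any image that can read a file
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory    # with no limit set, node allocatable is reported
```

The test's assertion works the same way: it fetches the container logs after the pod reaches Succeeded and compares the printed value against the node's allocatable memory.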
• [SLOW TEST:30.493 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":415,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:33:28.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:33:30.675: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:33:32.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883610, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883610, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883610, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883610, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:33:34.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883610, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883610, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883610, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883610, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:33:37.706: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 12 12:33:37.726: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:33:37.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3741" for this suite. STEP: Destroying namespace "webhook-3741-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.747 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":28,"skipped":421,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:33:37.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 12 12:33:37.949: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b772bdf-05d6-456b-9240-551884d2af59" in namespace "projected-9727" to be "Succeeded or Failed" May 12 12:33:38.035: INFO: Pod "downwardapi-volume-9b772bdf-05d6-456b-9240-551884d2af59": Phase="Pending", Reason="", readiness=false. Elapsed: 86.108084ms May 12 12:33:40.240: INFO: Pod "downwardapi-volume-9b772bdf-05d6-456b-9240-551884d2af59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290607885s May 12 12:33:42.243: INFO: Pod "downwardapi-volume-9b772bdf-05d6-456b-9240-551884d2af59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293979512s STEP: Saw pod success May 12 12:33:42.243: INFO: Pod "downwardapi-volume-9b772bdf-05d6-456b-9240-551884d2af59" satisfied condition "Succeeded or Failed" May 12 12:33:42.246: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9b772bdf-05d6-456b-9240-551884d2af59 container client-container: STEP: delete the pod May 12 12:33:42.441: INFO: Waiting for pod downwardapi-volume-9b772bdf-05d6-456b-9240-551884d2af59 to disappear May 12 12:33:42.531: INFO: Pod downwardapi-volume-9b772bdf-05d6-456b-9240-551884d2af59 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:33:42.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9727" for this suite. 
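The projected downward API test above exposes the container's CPU limit through a `projected` volume rather than a plain `downwardAPI` volume. A sketch of such a pod, assuming illustrative names and image (the e2e framework builds its own manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m               # report the limit in millicores
```

A projected volume lets the same mount combine downward API items with secrets, config maps, and service account tokens, which is why it gets its own conformance coverage.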
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":423,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:33:42.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod May 12 12:33:46.725: INFO: Pod pod-hostip-64236e14-4b99-4d00-94b4-5fc465e0f933 has hostIP: 172.17.0.15 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:33:46.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7059" for this suite. 
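The host IP test above asserts that `pod.Status.HostIP` is populated once the pod is scheduled. Outside the framework, the same value can be surfaced to the container itself via a downward API environment variable; a sketch under illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-example   # illustrative name
spec:
  containers:
  - name: test
    image: busybox           # assumption
    command: ["sh", "-c", "echo $HOST_IP && sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # node IP the pod landed on
```

Equivalently, `kubectl get pod pod-hostip-example -o jsonpath='{.status.hostIP}'` reads the field the test checks.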
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":445,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:33:46.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 12:33:46.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 12 12:33:49.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1711 create -f -'
May 12 12:33:57.544: INFO: stderr: ""
May 12 12:33:57.544: INFO: stdout: "e2e-test-crd-publish-openapi-8764-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 12 12:33:57.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1711 delete e2e-test-crd-publish-openapi-8764-crds test-cr'
May 12 12:33:57.748: INFO: stderr: ""
May 12 12:33:57.748: INFO: stdout: "e2e-test-crd-publish-openapi-8764-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May 12 12:33:57.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1711 apply -f -'
May 12 12:33:58.079: INFO: stderr: ""
May 12 12:33:58.079: INFO: stdout: "e2e-test-crd-publish-openapi-8764-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 12 12:33:58.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1711 delete e2e-test-crd-publish-openapi-8764-crds test-cr'
May 12 12:33:58.223: INFO: stderr: ""
May 12 12:33:58.223: INFO: stdout: "e2e-test-crd-publish-openapi-8764-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 12 12:33:58.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8764-crds'
May 12 12:33:58.490: INFO: stderr: ""
May 12 12:33:58.490: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8764-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:34:01.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1711" for this suite.
• [SLOW TEST:14.954 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":31,"skipped":461,"failed":0}
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:34:01.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-1507
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 12 12:34:03.032: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 12 12:34:03.945: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 12:34:05.992: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 12:34:07.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 12:34:09.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 12:34:11.950: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 12:34:13.949: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 12:34:15.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 12:34:17.949: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 12 12:34:18.026: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 12 12:34:20.030: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 12 12:34:22.545: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 12 12:34:33.143: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.219 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1507 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 12:34:33.143: INFO: >>> kubeConfig: /root/.kube/config
I0512 12:34:33.273714       7 log.go:172] (0xc003efa000) (0xc001eb4820) Create stream
I0512 12:34:33.273767       7 log.go:172] (0xc003efa000) (0xc001eb4820) Stream added, broadcasting: 1
I0512 12:34:33.275719       7 log.go:172] (0xc003efa000) Reply frame received for 1
I0512 12:34:33.275759       7 log.go:172] (0xc003efa000) (0xc001eb4a00) Create stream
I0512 12:34:33.275773       7 log.go:172] (0xc003efa000) (0xc001eb4a00) Stream added, broadcasting: 3
I0512 12:34:33.276857       7 log.go:172] (0xc003efa000) Reply frame received for 3
I0512 12:34:33.276899       7 log.go:172] (0xc003efa000) (0xc001eb4aa0) Create stream
I0512 12:34:33.276915       7 log.go:172] (0xc003efa000) (0xc001eb4aa0) Stream added, broadcasting: 5
I0512 12:34:33.278106       7 log.go:172] (0xc003efa000) Reply frame received for 5
I0512 12:34:34.333484       7 log.go:172] (0xc003efa000) Data frame received for 3
I0512 12:34:34.333600       7 log.go:172] (0xc001eb4a00) (3) Data frame handling
I0512 12:34:34.333654       7 log.go:172] (0xc001eb4a00) (3) Data frame sent
I0512 12:34:34.333780       7 log.go:172] (0xc003efa000) Data frame received for 5
I0512 12:34:34.333809       7 log.go:172] (0xc001eb4aa0) (5) Data frame handling
I0512 12:34:34.334038       7 log.go:172] (0xc003efa000) Data frame received for 3
I0512 12:34:34.334073       7 log.go:172] (0xc001eb4a00) (3) Data frame handling
I0512 12:34:34.337653       7 log.go:172] (0xc003efa000) Data frame received for 1
I0512 12:34:34.337729       7 log.go:172] (0xc001eb4820) (1) Data frame handling
I0512 12:34:34.337750       7 log.go:172] (0xc001eb4820) (1) Data frame sent
I0512 12:34:34.337766       7 log.go:172] (0xc003efa000) (0xc001eb4820) Stream removed, broadcasting: 1
I0512 12:34:34.337859       7 log.go:172] (0xc003efa000) Go away received
I0512 12:34:34.338187       7 log.go:172] (0xc003efa000) (0xc001eb4820) Stream removed, broadcasting: 1
I0512 12:34:34.338206       7 log.go:172] (0xc003efa000) (0xc001eb4a00) Stream removed, broadcasting: 3
I0512 12:34:34.338216       7 log.go:172] (0xc003efa000) (0xc001eb4aa0) Stream removed, broadcasting: 5
May 12 12:34:34.338: INFO: Found all expected endpoints: [netserver-0]
May 12 12:34:34.355: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.25 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1507 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 12:34:34.355: INFO: >>> kubeConfig: /root/.kube/config
I0512 12:34:34.485105       7 log.go:172] (0xc003efa580) (0xc001eb4f00) Create stream
I0512 12:34:34.485273       7 log.go:172] (0xc003efa580) (0xc001eb4f00) Stream added, broadcasting: 1
I0512 12:34:34.487015       7 log.go:172] (0xc003efa580) Reply frame received for 1
I0512 12:34:34.487040       7 log.go:172] (0xc003efa580) (0xc001a83860) Create stream
I0512 12:34:34.487052       7 log.go:172] (0xc003efa580) (0xc001a83860) Stream added, broadcasting: 3
I0512 12:34:34.487852       7 log.go:172] (0xc003efa580) Reply frame received for 3
I0512 12:34:34.487895       7 log.go:172] (0xc003efa580) (0xc001e6b9a0) Create stream
I0512 12:34:34.487906       7 log.go:172] (0xc003efa580) (0xc001e6b9a0) Stream added, broadcasting: 5
I0512 12:34:34.488646       7 log.go:172] (0xc003efa580) Reply frame received for 5
I0512 12:34:35.587966       7 log.go:172] (0xc003efa580) Data frame received for 3
I0512 12:34:35.588017       7 log.go:172] (0xc001a83860) (3) Data frame handling
I0512 12:34:35.588051       7 log.go:172] (0xc001a83860) (3) Data frame sent
I0512 12:34:35.588098       7 log.go:172] (0xc003efa580) Data frame received for 3
I0512 12:34:35.588123       7 log.go:172] (0xc001a83860) (3) Data frame handling
I0512 12:34:35.588200       7 log.go:172] (0xc003efa580) Data frame received for 5
I0512 12:34:35.588227       7 log.go:172] (0xc001e6b9a0) (5) Data frame handling
I0512 12:34:35.590023       7 log.go:172] (0xc003efa580) Data frame received for 1
I0512 12:34:35.590097       7 log.go:172] (0xc001eb4f00) (1) Data frame handling
I0512 12:34:35.590122       7 log.go:172] (0xc001eb4f00) (1) Data frame sent
I0512 12:34:35.590142       7 log.go:172] (0xc003efa580) (0xc001eb4f00) Stream removed, broadcasting: 1
I0512 12:34:35.590173       7 log.go:172] (0xc003efa580) Go away received
I0512 12:34:35.590261       7 log.go:172] (0xc003efa580) (0xc001eb4f00) Stream removed, broadcasting: 1
I0512 12:34:35.590279       7 log.go:172] (0xc003efa580) (0xc001a83860) Stream removed, broadcasting: 3
I0512 12:34:35.590287       7 log.go:172] (0xc003efa580) (0xc001e6b9a0) Stream removed, broadcasting: 5
May 12 12:34:35.590: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:34:35.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1507" for this suite.
• [SLOW TEST:33.910 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":461,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:34:35.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
May 12 12:34:35.763: INFO: Waiting up to 5m0s for pod "pod-a24c0325-30f4-48f8-a273-cc895b4862b7" in namespace "emptydir-4980" to be "Succeeded or Failed"
May 12 12:34:35.788: INFO: Pod "pod-a24c0325-30f4-48f8-a273-cc895b4862b7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.304023ms
May 12 12:34:37.791: INFO: Pod "pod-a24c0325-30f4-48f8-a273-cc895b4862b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02826826s
May 12 12:34:39.804: INFO: Pod "pod-a24c0325-30f4-48f8-a273-cc895b4862b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040886701s
STEP: Saw pod success
May 12 12:34:39.804: INFO: Pod "pod-a24c0325-30f4-48f8-a273-cc895b4862b7" satisfied condition "Succeeded or Failed"
May 12 12:34:39.806: INFO: Trying to get logs from node kali-worker pod pod-a24c0325-30f4-48f8-a273-cc895b4862b7 container test-container:
STEP: delete the pod
May 12 12:34:39.862: INFO: Waiting for pod pod-a24c0325-30f4-48f8-a273-cc895b4862b7 to disappear
May 12 12:34:39.864: INFO: Pod pod-a24c0325-30f4-48f8-a273-cc895b4862b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:34:39.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4980" for this suite.
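For context on what the emptydir case above exercises: the framework creates a single pod that mounts an emptyDir volume, writes a file with the requested mode as a non-root user, and exits, after which the harness checks the pod reached "Succeeded". The manifest below is a hand-written sketch of that shape, not the pod the framework actually generated; the name, image, and command are illustrative assumptions (the real test uses a generated pod name and the agnhost mounttest image).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # illustrative; the framework generates pod-<uuid> names
spec:
  restartPolicy: Never            # pod must terminate so the "Succeeded" check applies
  securityContext:
    runAsUser: 1001               # the "non-root" part of the test case
  containers:
  - name: test-container
    image: busybox                # illustrative stand-in for the e2e test image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # "default medium" = the node's filesystem, not medium: Memory
```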
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":473,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:34:39.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 12:34:39.932: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa195daa-229e-4c35-b81b-8a80c6634529" in namespace "projected-3076" to be "Succeeded or Failed"
May 12 12:34:40.000: INFO: Pod "downwardapi-volume-fa195daa-229e-4c35-b81b-8a80c6634529": Phase="Pending", Reason="", readiness=false. Elapsed: 67.217416ms
May 12 12:34:42.348: INFO: Pod "downwardapi-volume-fa195daa-229e-4c35-b81b-8a80c6634529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.415209909s
May 12 12:34:44.350: INFO: Pod "downwardapi-volume-fa195daa-229e-4c35-b81b-8a80c6634529": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417927314s
May 12 12:34:46.413: INFO: Pod "downwardapi-volume-fa195daa-229e-4c35-b81b-8a80c6634529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.480981602s
STEP: Saw pod success
May 12 12:34:46.413: INFO: Pod "downwardapi-volume-fa195daa-229e-4c35-b81b-8a80c6634529" satisfied condition "Succeeded or Failed"
May 12 12:34:46.420: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-fa195daa-229e-4c35-b81b-8a80c6634529 container client-container:
STEP: delete the pod
May 12 12:34:46.595: INFO: Waiting for pod downwardapi-volume-fa195daa-229e-4c35-b81b-8a80c6634529 to disappear
May 12 12:34:46.606: INFO: Pod downwardapi-volume-fa195daa-229e-4c35-b81b-8a80c6634529 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:34:46.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3076" for this suite.
• [SLOW TEST:6.738 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":487,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:34:46.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May 12 12:34:46.801: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3903 /api/v1/namespaces/watch-3903/configmaps/e2e-watch-test-label-changed a07dab80-0e1b-4809-bbf8-a9ec3045446a 3720160 0 2020-05-12 12:34:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 12:34:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 12:34:46.801: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3903 /api/v1/namespaces/watch-3903/configmaps/e2e-watch-test-label-changed a07dab80-0e1b-4809-bbf8-a9ec3045446a 3720161 0 2020-05-12 12:34:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 12:34:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 12:34:46.801: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3903 /api/v1/namespaces/watch-3903/configmaps/e2e-watch-test-label-changed a07dab80-0e1b-4809-bbf8-a9ec3045446a 3720162 0 2020-05-12 12:34:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 12:34:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May 12 12:34:56.943: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3903 /api/v1/namespaces/watch-3903/configmaps/e2e-watch-test-label-changed a07dab80-0e1b-4809-bbf8-a9ec3045446a 3720201 0 2020-05-12 12:34:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 12:34:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 12:34:56.943: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3903 /api/v1/namespaces/watch-3903/configmaps/e2e-watch-test-label-changed a07dab80-0e1b-4809-bbf8-a9ec3045446a 3720202 0 2020-05-12 12:34:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 12:34:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 12:34:56.943: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3903 /api/v1/namespaces/watch-3903/configmaps/e2e-watch-test-label-changed a07dab80-0e1b-4809-bbf8-a9ec3045446a 3720203 0 2020-05-12 12:34:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 12:34:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:34:56.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3903" for this suite.
• [SLOW TEST:10.338 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":35,"skipped":489,"failed":0}
S
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:34:56.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 12 12:35:03.752: INFO: Successfully updated pod "pod-update-activedeadlineseconds-73ba34b7-b995-4a63-9b6e-c017c2cef71e"
May 12 12:35:03.752: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-73ba34b7-b995-4a63-9b6e-c017c2cef71e" in namespace "pods-5673" to be "terminated due to deadline exceeded"
May 12 12:35:03.787: INFO: Pod "pod-update-activedeadlineseconds-73ba34b7-b995-4a63-9b6e-c017c2cef71e": Phase="Running", Reason="", readiness=true. Elapsed: 35.115963ms
May 12 12:35:05.791: INFO: Pod "pod-update-activedeadlineseconds-73ba34b7-b995-4a63-9b6e-c017c2cef71e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.038716393s
May 12 12:35:05.791: INFO: Pod "pod-update-activedeadlineseconds-73ba34b7-b995-4a63-9b6e-c017c2cef71e" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:35:05.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5673" for this suite.
• [SLOW TEST:8.849 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":490,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:35:05.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-b275b474-ce08-4a4c-ad86-a2e1df33777c
STEP: Creating a pod to test consume secrets
May 12 12:35:06.089: INFO: Waiting up to 5m0s for pod "pod-secrets-c4654438-6b3c-42a7-a8a8-7877089dbf65" in namespace "secrets-5769" to be "Succeeded or Failed"
May 12 12:35:06.111: INFO: Pod "pod-secrets-c4654438-6b3c-42a7-a8a8-7877089dbf65": Phase="Pending", Reason="", readiness=false. Elapsed: 21.314914ms
May 12 12:35:08.115: INFO: Pod "pod-secrets-c4654438-6b3c-42a7-a8a8-7877089dbf65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025458239s
May 12 12:35:10.118: INFO: Pod "pod-secrets-c4654438-6b3c-42a7-a8a8-7877089dbf65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028913848s
STEP: Saw pod success
May 12 12:35:10.118: INFO: Pod "pod-secrets-c4654438-6b3c-42a7-a8a8-7877089dbf65" satisfied condition "Succeeded or Failed"
May 12 12:35:10.121: INFO: Trying to get logs from node kali-worker pod pod-secrets-c4654438-6b3c-42a7-a8a8-7877089dbf65 container secret-env-test:
STEP: delete the pod
May 12 12:35:10.175: INFO: Waiting for pod pod-secrets-c4654438-6b3c-42a7-a8a8-7877089dbf65 to disappear
May 12 12:35:10.203: INFO: Pod pod-secrets-c4654438-6b3c-42a7-a8a8-7877089dbf65 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:35:10.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5769" for this suite.
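The secrets-in-env-vars case above boils down to the standard `secretKeyRef` pattern: a Secret is created, and a pod maps one of its keys into a container environment variable. A minimal hand-written equivalent is sketched below; the object names, key, and command are illustrative assumptions, not the generated names from the log.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test              # illustrative; the log shows a generated secret-test-<uuid> name
type: Opaque
stringData:
  data-1: value-1                # stringData avoids hand-encoding base64
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox               # illustrative stand-in for the e2e test image
    command: ["sh", "-c", "env"] # the test asserts the expected variable appears in the output
    env:
    - name: SECRET_DATA          # illustrative variable name
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```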
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":551,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:35:10.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-5f4c6143-e36e-4f2f-ab2c-7ef9aeaddd5a STEP: Creating a pod to test consume secrets May 12 12:35:10.303: INFO: Waiting up to 5m0s for pod "pod-secrets-e8353a73-2054-4c58-b255-bb4eea42a415" in namespace "secrets-2767" to be "Succeeded or Failed" May 12 12:35:10.319: INFO: Pod "pod-secrets-e8353a73-2054-4c58-b255-bb4eea42a415": Phase="Pending", Reason="", readiness=false. Elapsed: 15.887665ms May 12 12:35:12.323: INFO: Pod "pod-secrets-e8353a73-2054-4c58-b255-bb4eea42a415": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019559359s May 12 12:35:14.563: INFO: Pod "pod-secrets-e8353a73-2054-4c58-b255-bb4eea42a415": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259415405s May 12 12:35:16.566: INFO: Pod "pod-secrets-e8353a73-2054-4c58-b255-bb4eea42a415": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.262588809s STEP: Saw pod success May 12 12:35:16.566: INFO: Pod "pod-secrets-e8353a73-2054-4c58-b255-bb4eea42a415" satisfied condition "Succeeded or Failed" May 12 12:35:16.568: INFO: Trying to get logs from node kali-worker pod pod-secrets-e8353a73-2054-4c58-b255-bb4eea42a415 container secret-volume-test: STEP: delete the pod May 12 12:35:16.756: INFO: Waiting for pod pod-secrets-e8353a73-2054-4c58-b255-bb4eea42a415 to disappear May 12 12:35:16.781: INFO: Pod pod-secrets-e8353a73-2054-4c58-b255-bb4eea42a415 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:35:16.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2767" for this suite. • [SLOW TEST:6.591 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":563,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:35:16.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared 
volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 12 12:35:22.947: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7765 PodName:pod-sharedvolume-617a59dd-0e0f-4b3c-82b0-f3beeceed4ad ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:35:22.947: INFO: >>> kubeConfig: /root/.kube/config I0512 12:35:23.079451 7 log.go:172] (0xc003efb130) (0xc001b186e0) Create stream I0512 12:35:23.079477 7 log.go:172] (0xc003efb130) (0xc001b186e0) Stream added, broadcasting: 1 I0512 12:35:23.080824 7 log.go:172] (0xc003efb130) Reply frame received for 1 I0512 12:35:23.080847 7 log.go:172] (0xc003efb130) (0xc002d928c0) Create stream I0512 12:35:23.080857 7 log.go:172] (0xc003efb130) (0xc002d928c0) Stream added, broadcasting: 3 I0512 12:35:23.081795 7 log.go:172] (0xc003efb130) Reply frame received for 3 I0512 12:35:23.081837 7 log.go:172] (0xc003efb130) (0xc0028580a0) Create stream I0512 12:35:23.081850 7 log.go:172] (0xc003efb130) (0xc0028580a0) Stream added, broadcasting: 5 I0512 12:35:23.082508 7 log.go:172] (0xc003efb130) Reply frame received for 5 I0512 12:35:23.122380 7 log.go:172] (0xc003efb130) Data frame received for 5 I0512 12:35:23.122434 7 log.go:172] (0xc0028580a0) (5) Data frame handling I0512 12:35:23.122466 7 log.go:172] (0xc003efb130) Data frame received for 3 I0512 12:35:23.122481 7 log.go:172] (0xc002d928c0) (3) Data frame handling I0512 12:35:23.122501 7 log.go:172] (0xc002d928c0) (3) Data frame sent I0512 12:35:23.122524 7 log.go:172] (0xc003efb130) Data frame received for 3 I0512 12:35:23.122539 7 log.go:172] (0xc002d928c0) (3) Data frame handling I0512 12:35:23.124218 7 log.go:172] (0xc003efb130) Data frame received for 1 I0512 
12:35:23.124292 7 log.go:172] (0xc001b186e0) (1) Data frame handling I0512 12:35:23.124334 7 log.go:172] (0xc001b186e0) (1) Data frame sent I0512 12:35:23.124366 7 log.go:172] (0xc003efb130) (0xc001b186e0) Stream removed, broadcasting: 1 I0512 12:35:23.124418 7 log.go:172] (0xc003efb130) Go away received I0512 12:35:23.124607 7 log.go:172] (0xc003efb130) (0xc001b186e0) Stream removed, broadcasting: 1 I0512 12:35:23.124644 7 log.go:172] (0xc003efb130) (0xc002d928c0) Stream removed, broadcasting: 3 I0512 12:35:23.124659 7 log.go:172] (0xc003efb130) (0xc0028580a0) Stream removed, broadcasting: 5 May 12 12:35:23.124: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:35:23.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7765" for this suite. • [SLOW TEST:6.354 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":39,"skipped":571,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:35:23.155: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:35:23.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2974" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":40,"skipped":577,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:35:23.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-29585847-6c66-4d35-a6ed-6bc68706708f [AfterEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:35:23.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3957" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":41,"skipped":594,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:35:23.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-0ec2e8e5-3781-44f9-bb0d-9af8013a2176 STEP: Creating a pod to test consume configMaps May 12 12:35:23.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-75b95beb-3f9e-424d-a171-216c1a692a10" in namespace "configmap-9230" to be "Succeeded or Failed" May 12 12:35:23.601: INFO: Pod "pod-configmaps-75b95beb-3f9e-424d-a171-216c1a692a10": Phase="Pending", Reason="", readiness=false. Elapsed: 3.382828ms May 12 12:35:25.605: INFO: Pod "pod-configmaps-75b95beb-3f9e-424d-a171-216c1a692a10": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007624121s May 12 12:35:27.609: INFO: Pod "pod-configmaps-75b95beb-3f9e-424d-a171-216c1a692a10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011460947s May 12 12:35:29.659: INFO: Pod "pod-configmaps-75b95beb-3f9e-424d-a171-216c1a692a10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060984167s STEP: Saw pod success May 12 12:35:29.659: INFO: Pod "pod-configmaps-75b95beb-3f9e-424d-a171-216c1a692a10" satisfied condition "Succeeded or Failed" May 12 12:35:29.662: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-75b95beb-3f9e-424d-a171-216c1a692a10 container configmap-volume-test: STEP: delete the pod May 12 12:35:29.701: INFO: Waiting for pod pod-configmaps-75b95beb-3f9e-424d-a171-216c1a692a10 to disappear May 12 12:35:29.715: INFO: Pod pod-configmaps-75b95beb-3f9e-424d-a171-216c1a692a10 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:35:29.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9230" for this suite. 
• [SLOW TEST:6.309 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:35:29.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 12:35:29.805: INFO: Waiting up to 5m0s for pod "pod-29ba9929-d3ec-45df-af8a-61ca0ec9091b" in namespace "emptydir-8348" to be "Succeeded or Failed" May 12 12:35:29.818: INFO: Pod "pod-29ba9929-d3ec-45df-af8a-61ca0ec9091b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.604476ms May 12 12:35:31.822: INFO: Pod "pod-29ba9929-d3ec-45df-af8a-61ca0ec9091b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016793802s May 12 12:35:33.825: INFO: Pod "pod-29ba9929-d3ec-45df-af8a-61ca0ec9091b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019617597s STEP: Saw pod success May 12 12:35:33.825: INFO: Pod "pod-29ba9929-d3ec-45df-af8a-61ca0ec9091b" satisfied condition "Succeeded or Failed" May 12 12:35:33.827: INFO: Trying to get logs from node kali-worker2 pod pod-29ba9929-d3ec-45df-af8a-61ca0ec9091b container test-container: STEP: delete the pod May 12 12:35:33.883: INFO: Waiting for pod pod-29ba9929-d3ec-45df-af8a-61ca0ec9091b to disappear May 12 12:35:34.059: INFO: Pod pod-29ba9929-d3ec-45df-af8a-61ca0ec9091b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:35:34.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8348" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":636,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:35:34.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller May 12 12:35:34.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1898' May 12 12:35:34.849: INFO: stderr: "" May 12 12:35:34.849: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 12:35:34.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1898' May 12 12:35:35.059: INFO: stderr: "" May 12 12:35:35.059: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 May 12 12:35:40.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1898' May 12 12:35:40.163: INFO: stderr: "" May 12 12:35:40.163: INFO: stdout: "update-demo-nautilus-6spg2 update-demo-nautilus-9f527 " May 12 12:35:40.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6spg2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:40.427: INFO: stderr: "" May 12 12:35:40.427: INFO: stdout: "" May 12 12:35:40.427: INFO: update-demo-nautilus-6spg2 is created but not running May 12 12:35:45.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1898' May 12 12:35:45.561: INFO: stderr: "" May 12 12:35:45.561: INFO: stdout: "update-demo-nautilus-6spg2 update-demo-nautilus-9f527 " May 12 12:35:45.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6spg2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:45.642: INFO: stderr: "" May 12 12:35:45.642: INFO: stdout: "true" May 12 12:35:45.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6spg2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:45.732: INFO: stderr: "" May 12 12:35:45.732: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 12:35:45.732: INFO: validating pod update-demo-nautilus-6spg2 May 12 12:35:45.736: INFO: got data: { "image": "nautilus.jpg" } May 12 12:35:45.736: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 12:35:45.736: INFO: update-demo-nautilus-6spg2 is verified up and running May 12 12:35:45.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f527 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:45.821: INFO: stderr: "" May 12 12:35:45.821: INFO: stdout: "true" May 12 12:35:45.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f527 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:45.906: INFO: stderr: "" May 12 12:35:45.906: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 12:35:45.906: INFO: validating pod update-demo-nautilus-9f527 May 12 12:35:45.909: INFO: got data: { "image": "nautilus.jpg" } May 12 12:35:45.909: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 12:35:45.909: INFO: update-demo-nautilus-9f527 is verified up and running STEP: scaling down the replication controller May 12 12:35:45.911: INFO: scanned /root for discovery docs: May 12 12:35:45.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1898' May 12 12:35:47.395: INFO: stderr: "" May 12 12:35:47.395: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 12 12:35:47.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1898' May 12 12:35:47.610: INFO: stderr: "" May 12 12:35:47.610: INFO: stdout: "update-demo-nautilus-6spg2 update-demo-nautilus-9f527 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 12 12:35:52.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1898' May 12 12:35:52.712: INFO: stderr: "" May 12 12:35:52.712: INFO: stdout: "update-demo-nautilus-6spg2 update-demo-nautilus-9f527 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 12 12:35:57.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1898' May 12 12:35:57.813: INFO: stderr: "" May 12 12:35:57.813: INFO: stdout: "update-demo-nautilus-6spg2 " May 12 12:35:57.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6spg2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:57.910: INFO: stderr: "" May 12 12:35:57.910: INFO: stdout: "true" May 12 12:35:57.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6spg2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:58.095: INFO: stderr: "" May 12 12:35:58.095: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 12:35:58.095: INFO: validating pod update-demo-nautilus-6spg2 May 12 12:35:58.099: INFO: got data: { "image": "nautilus.jpg" } May 12 12:35:58.099: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 12:35:58.099: INFO: update-demo-nautilus-6spg2 is verified up and running STEP: scaling up the replication controller May 12 12:35:58.101: INFO: scanned /root for discovery docs: May 12 12:35:58.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1898' May 12 12:35:59.230: INFO: stderr: "" May 12 12:35:59.230: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 12:35:59.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1898' May 12 12:35:59.321: INFO: stderr: "" May 12 12:35:59.321: INFO: stdout: "update-demo-nautilus-6spg2 update-demo-nautilus-rrccc " May 12 12:35:59.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6spg2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:59.412: INFO: stderr: "" May 12 12:35:59.412: INFO: stdout: "true" May 12 12:35:59.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6spg2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:59.494: INFO: stderr: "" May 12 12:35:59.494: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 12:35:59.494: INFO: validating pod update-demo-nautilus-6spg2 May 12 12:35:59.497: INFO: got data: { "image": "nautilus.jpg" } May 12 12:35:59.497: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 12:35:59.497: INFO: update-demo-nautilus-6spg2 is verified up and running May 12 12:35:59.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrccc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:35:59.757: INFO: stderr: "" May 12 12:35:59.757: INFO: stdout: "" May 12 12:35:59.757: INFO: update-demo-nautilus-rrccc is created but not running May 12 12:36:04.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1898' May 12 12:36:04.861: INFO: stderr: "" May 12 12:36:04.861: INFO: stdout: "update-demo-nautilus-6spg2 update-demo-nautilus-rrccc " May 12 12:36:04.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6spg2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:36:04.962: INFO: stderr: "" May 12 12:36:04.962: INFO: stdout: "true" May 12 12:36:04.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6spg2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:36:05.064: INFO: stderr: "" May 12 12:36:05.064: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 12:36:05.064: INFO: validating pod update-demo-nautilus-6spg2 May 12 12:36:05.067: INFO: got data: { "image": "nautilus.jpg" } May 12 12:36:05.067: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 12:36:05.067: INFO: update-demo-nautilus-6spg2 is verified up and running May 12 12:36:05.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrccc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:36:05.167: INFO: stderr: "" May 12 12:36:05.167: INFO: stdout: "true" May 12 12:36:05.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrccc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1898' May 12 12:36:05.267: INFO: stderr: "" May 12 12:36:05.267: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 12:36:05.267: INFO: validating pod update-demo-nautilus-rrccc May 12 12:36:05.271: INFO: got data: { "image": "nautilus.jpg" } May 12 12:36:05.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 12:36:05.271: INFO: update-demo-nautilus-rrccc is verified up and running STEP: using delete to clean up resources May 12 12:36:05.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1898' May 12 12:36:05.405: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 12 12:36:05.405: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 12:36:05.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1898' May 12 12:36:05.517: INFO: stderr: "No resources found in kubectl-1898 namespace.\n" May 12 12:36:05.517: INFO: stdout: "" May 12 12:36:05.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1898 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 12:36:05.614: INFO: stderr: "" May 12 12:36:05.614: INFO: stdout: "update-demo-nautilus-6spg2\nupdate-demo-nautilus-rrccc\n" May 12 12:36:06.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1898' May 12 12:36:06.212: INFO: stderr: "No resources found in kubectl-1898 namespace.\n" May 12 12:36:06.212: INFO: stdout: "" May 12 12:36:06.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1898 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 12:36:06.316: INFO: stderr: "" May 12 12:36:06.316: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:36:06.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1898" for this suite. 
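The `kubectl get pods -o template --template={{range.items}}{{.metadata.name}} {{end}}` invocations above use Go's text/template syntax to walk the returned pod list. A minimal standalone sketch of that rendering step against mock data — the pod-list shape here is illustrative, not a real API response:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderNames applies the same template the test passes to kubectl,
// emitting each item's .metadata.name followed by a space.
func renderNames(data any) (string, error) {
	tmpl, err := template.New("pods").Parse(`{{range .items}}{{.metadata.name}} {{end}}`)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, data); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// Mock of the list object the template walks (illustrative only).
	data := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"name": "update-demo-nautilus-6spg2"}},
			{"metadata": map[string]any{"name": "update-demo-nautilus-9f527"}},
		},
	}
	s, err := renderNames(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // prints "update-demo-nautilus-6spg2 update-demo-nautilus-9f527 "
}
```

This is why the test's polling output is a space-separated name list (`"update-demo-nautilus-6spg2 update-demo-nautilus-9f527 "`), and an empty stdout simply means the template ranged over zero items.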
• [SLOW TEST:32.255 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":44,"skipped":637,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:36:06.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 12 12:36:06.751: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 12:36:06.760: INFO: Waiting for terminating namespaces to be deleted... 
May 12 12:36:06.763: INFO: Logging pods the kubelet thinks are on node kali-worker before test May 12 12:36:06.769: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded) May 12 12:36:06.769: INFO: Container kindnet-cni ready: true, restart count 1 May 12 12:36:06.769: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded) May 12 12:36:06.769: INFO: Container kube-proxy ready: true, restart count 0 May 12 12:36:06.769: INFO: update-demo-nautilus-6spg2 from kubectl-1898 started at 2020-05-12 12:35:35 +0000 UTC (1 container status recorded) May 12 12:36:06.769: INFO: Container update-demo ready: true, restart count 0 May 12 12:36:06.769: INFO: Logging pods the kubelet thinks are on node kali-worker2 before test May 12 12:36:06.774: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded) May 12 12:36:06.774: INFO: Container kube-proxy ready: true, restart count 0 May 12 12:36:06.774: INFO: update-demo-nautilus-rrccc from kubectl-1898 started at 2020-05-12 12:35:58 +0000 UTC (1 container status recorded) May 12 12:36:06.774: INFO: Container update-demo ready: true, restart count 0 May 12 12:36:06.774: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded) May 12 12:36:06.774: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-c9d0cbbf-01b7-4cd1-8c27-99cac6710fcb 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-c9d0cbbf-01b7-4cd1-8c27-99cac6710fcb off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-c9d0cbbf-01b7-4cd1-8c27-99cac6710fcb [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:36:23.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5760" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.759 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":45,"skipped":656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:36:23.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs May 12 12:36:23.239: INFO: Waiting up to 5m0s for pod "pod-8fdd638c-b6df-4078-a36d-3b4dcdd0d0f1" in namespace "emptydir-9237" to be "Succeeded or Failed" May 12 12:36:23.243: INFO: Pod "pod-8fdd638c-b6df-4078-a36d-3b4dcdd0d0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.493443ms May 12 12:36:25.366: INFO: Pod "pod-8fdd638c-b6df-4078-a36d-3b4dcdd0d0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126351693s May 12 12:36:27.370: INFO: Pod "pod-8fdd638c-b6df-4078-a36d-3b4dcdd0d0f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.130244753s STEP: Saw pod success May 12 12:36:27.370: INFO: Pod "pod-8fdd638c-b6df-4078-a36d-3b4dcdd0d0f1" satisfied condition "Succeeded or Failed" May 12 12:36:27.372: INFO: Trying to get logs from node kali-worker pod pod-8fdd638c-b6df-4078-a36d-3b4dcdd0d0f1 container test-container: STEP: delete the pod May 12 12:36:27.411: INFO: Waiting for pod pod-8fdd638c-b6df-4078-a36d-3b4dcdd0d0f1 to disappear May 12 12:36:27.416: INFO: Pod pod-8fdd638c-b6df-4078-a36d-3b4dcdd0d0f1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:36:27.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9237" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":686,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:36:27.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-czpd STEP: Creating a pod to test 
atomic-volume-subpath May 12 12:36:27.560: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-czpd" in namespace "subpath-7224" to be "Succeeded or Failed" May 12 12:36:27.592: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.323203ms May 12 12:36:29.629: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069511581s May 12 12:36:31.664: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104370983s May 12 12:36:33.667: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. Elapsed: 6.107694194s May 12 12:36:35.671: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. Elapsed: 8.111610047s May 12 12:36:37.676: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. Elapsed: 10.11617901s May 12 12:36:39.680: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.11993161s May 12 12:36:41.683: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.122847688s May 12 12:36:43.686: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. Elapsed: 16.126707196s May 12 12:36:45.690: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. Elapsed: 18.130659897s May 12 12:36:47.694: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. Elapsed: 20.134342596s May 12 12:36:49.698: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. Elapsed: 22.138633999s May 12 12:36:51.746: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.185955861s May 12 12:36:53.748: INFO: Pod "pod-subpath-test-configmap-czpd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.188398131s STEP: Saw pod success May 12 12:36:53.748: INFO: Pod "pod-subpath-test-configmap-czpd" satisfied condition "Succeeded or Failed" May 12 12:36:53.750: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-czpd container test-container-subpath-configmap-czpd: STEP: delete the pod May 12 12:36:53.876: INFO: Waiting for pod pod-subpath-test-configmap-czpd to disappear May 12 12:36:53.921: INFO: Pod pod-subpath-test-configmap-czpd no longer exists STEP: Deleting pod pod-subpath-test-configmap-czpd May 12 12:36:53.921: INFO: Deleting pod "pod-subpath-test-configmap-czpd" in namespace "subpath-7224" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:36:53.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7224" for this suite. 
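The "Atomic writer volumes" subpath test above depends on the kubelet updating configmap-backed files atomically, so a reader never observes a partially written file. A minimal Python sketch of that write-then-rename pattern (illustrative only, not the kubelet's actual atomic-writer implementation):

```python
import os
import tempfile


def atomic_write(path, data):
    """Write data to path atomically: readers see either the old or the
    new contents, never a partial file. Stages the data in a temp file in
    the same directory, then renames it into place; rename(2) is atomic
    within a single filesystem."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)  # atomically swap in the new contents
    except BaseException:
        os.unlink(tmp)  # clean up the staged file on failure
        raise
```

Staging in the same directory matters: `os.replace` across filesystems would fall back to a non-atomic copy and fail.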
• [SLOW TEST:26.507 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":47,"skipped":686,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:36:53.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-63968e0f-26d7-412e-844e-7923e4d70bbd STEP: Creating a pod to test consume configMaps May 12 12:36:54.310: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50ef567b-68ae-4264-aaab-57aa0bec6b81" in namespace "projected-5661" to be "Succeeded or Failed" May 12 12:36:54.329: INFO: Pod "pod-projected-configmaps-50ef567b-68ae-4264-aaab-57aa0bec6b81": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.175251ms May 12 12:36:56.333: INFO: Pod "pod-projected-configmaps-50ef567b-68ae-4264-aaab-57aa0bec6b81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023018535s May 12 12:36:58.432: INFO: Pod "pod-projected-configmaps-50ef567b-68ae-4264-aaab-57aa0bec6b81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121239393s May 12 12:37:00.435: INFO: Pod "pod-projected-configmaps-50ef567b-68ae-4264-aaab-57aa0bec6b81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124784588s STEP: Saw pod success May 12 12:37:00.435: INFO: Pod "pod-projected-configmaps-50ef567b-68ae-4264-aaab-57aa0bec6b81" satisfied condition "Succeeded or Failed" May 12 12:37:00.438: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-50ef567b-68ae-4264-aaab-57aa0bec6b81 container projected-configmap-volume-test: STEP: delete the pod May 12 12:37:00.493: INFO: Waiting for pod pod-projected-configmaps-50ef567b-68ae-4264-aaab-57aa0bec6b81 to disappear May 12 12:37:00.517: INFO: Pod pod-projected-configmaps-50ef567b-68ae-4264-aaab-57aa0bec6b81 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:37:00.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5661" for this suite. 
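The "consumable from pods in volume with mappings" case above checks that configMap keys can be projected under different file names via the volume's `items[].path` field. A toy model of that projection (the `key`/`path` field names follow the ConfigMap volume API; the function itself is illustrative, not kubelet code):

```python
def project_configmap(data, items=None):
    """Model how a configMap volume lays out files.

    Without `items`, every key becomes a file named after the key.
    With `items`, only the listed keys are projected, each under the
    file name given by its `path` (the "mappings" this test checks).
    Returns a dict of relative file path -> file contents."""
    if items is None:
        return dict(data)
    return {item["path"]: data[item["key"]] for item in items}
```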
• [SLOW TEST:6.596 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":697,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:37:00.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:37:11.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9339" for this suite. 
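The ResourceQuota test above asserts a create → usage-captured → delete → usage-released cycle. A toy accounting model of that lifecycle (not the real quota controller; the resource name in the usage example is illustrative):

```python
class ResourceQuotaModel:
    """Minimal model of ResourceQuota accounting: `hard` caps each
    tracked resource, `used` follows creations and deletions, and a
    creation that would exceed the cap is rejected."""

    def __init__(self, hard):
        self.hard = hard                      # e.g. {"count/replicasets.apps": 1}
        self.used = {k: 0 for k in hard}      # quota starts fully released

    def create(self, resource):
        if self.used[resource] + 1 > self.hard[resource]:
            raise RuntimeError("exceeded quota for " + resource)
        self.used[resource] += 1              # usage captured on creation

    def delete(self, resource):
        self.used[resource] -= 1              # usage released on deletion
```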
• [SLOW TEST:11.180 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":49,"skipped":697,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:37:11.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4823.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4823.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 12:37:19.934: INFO: DNS probes using 
dns-test-d9e992d8-89a7-4775-aa7e-99c6b866d040 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4823.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4823.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 12:37:28.094: INFO: File wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 12:37:28.097: INFO: File jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 12:37:28.097: INFO: Lookups using dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 failed for: [wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local] May 12 12:37:33.101: INFO: File wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 12:37:33.104: INFO: File jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 12 12:37:33.104: INFO: Lookups using dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 failed for: [wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local] May 12 12:37:38.100: INFO: File wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 12:37:38.104: INFO: File jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 12:37:38.104: INFO: Lookups using dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 failed for: [wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local] May 12 12:37:43.120: INFO: File wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 12:37:43.123: INFO: File jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 12:37:43.123: INFO: Lookups using dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 failed for: [wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local] May 12 12:37:48.102: INFO: File wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 12:37:48.105: INFO: File jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local from pod dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 12 12:37:48.105: INFO: Lookups using dns-4823/dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 failed for: [wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local] May 12 12:37:53.127: INFO: DNS probes using dns-test-bd174fd6-b9e1-4d47-843b-4f0782ca6f89 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4823.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4823.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4823.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4823.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 12:38:03.746: INFO: DNS probes using dns-test-b326af51-575f-4c30-b7a0-9819673567c2 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:38:03.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4823" for this suite. 
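The DNS probes above re-run `dig` in a loop until the updated ExternalName (CNAME) record propagates, tolerating the interim "contains 'foo.example.com.' instead of 'bar.example.com.'" failures. The same poll-until-expected pattern, sketched with an injectable lookup (all parameter names here are made up for illustration):

```python
import time


def wait_for_record(lookup, expected, timeout=30.0, interval=5.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll `lookup()` until it returns `expected` or `timeout` elapses.

    `lookup` is any zero-argument callable (a stand-in for one dig
    query); returns True once the expected answer is seen, False if
    the deadline passes first. `clock` and `sleep` are injectable so
    the loop can be exercised without real waiting."""
    deadline = clock() + timeout
    while clock() < deadline:
        if lookup() == expected:
            return True
        sleep(interval)  # back off before re-querying
    return False
```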
• [SLOW TEST:52.209 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":50,"skipped":711,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:38:03.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:38:04.308: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-bd83ee03-17ee-4e52-8201-6374bfbd895b" in namespace "security-context-test-7263" to be "Succeeded or Failed" May 12 12:38:04.374: INFO: Pod "busybox-privileged-false-bd83ee03-17ee-4e52-8201-6374bfbd895b": Phase="Pending", Reason="", readiness=false. Elapsed: 65.365565ms May 12 12:38:06.378: INFO: Pod "busybox-privileged-false-bd83ee03-17ee-4e52-8201-6374bfbd895b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.069733932s May 12 12:38:08.462: INFO: Pod "busybox-privileged-false-bd83ee03-17ee-4e52-8201-6374bfbd895b": Phase="Running", Reason="", readiness=true. Elapsed: 4.1539853s May 12 12:38:10.481: INFO: Pod "busybox-privileged-false-bd83ee03-17ee-4e52-8201-6374bfbd895b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.172338882s May 12 12:38:10.481: INFO: Pod "busybox-privileged-false-bd83ee03-17ee-4e52-8201-6374bfbd895b" satisfied condition "Succeeded or Failed" May 12 12:38:10.540: INFO: Got logs for pod "busybox-privileged-false-bd83ee03-17ee-4e52-8201-6374bfbd895b": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:38:10.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7263" for this suite. • [SLOW TEST:6.639 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":717,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:38:10.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-6cfb87bd-774b-4339-a0f6-f6aa4dcf5714 in namespace container-probe-7983 May 12 12:38:16.669: INFO: Started pod liveness-6cfb87bd-774b-4339-a0f6-f6aa4dcf5714 in namespace container-probe-7983 STEP: checking the pod's current state and verifying that restartCount is present May 12 12:38:16.768: INFO: Initial restart count of pod liveness-6cfb87bd-774b-4339-a0f6-f6aa4dcf5714 is 0 May 12 12:38:36.958: INFO: Restart count of pod container-probe-7983/liveness-6cfb87bd-774b-4339-a0f6-f6aa4dcf5714 is now 1 (20.18986215s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:38:36.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7983" for this suite. 
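The liveness-probe test above waits for restartCount to go from 0 to 1 once the /healthz endpoint starts failing. A simplified model of the `failureThreshold` logic (the real kubelet state machine also involves probe periods, grace periods, and backoff):

```python
def run_probes(results, failure_threshold=3):
    """Replay a sequence of probe results (True = healthy) and count
    restarts: after `failure_threshold` consecutive failures the
    container is restarted and the consecutive-failure counter resets.
    A success at any point also resets the counter."""
    restarts = 0
    consecutive = 0
    for ok in results:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= failure_threshold:
            restarts += 1
            consecutive = 0  # the restarted container starts fresh
    return restarts
```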
• [SLOW TEST:26.580 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":720,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:38:37.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:38:54.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5277" for this suite. • [SLOW TEST:17.075 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":53,"skipped":736,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:38:54.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:38:54.432: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 12 12:38:56.485: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:38:57.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9951" for this suite. 
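The ReplicationController test above expects a ReplicaFailure condition while the quota blocks pod creation, and expects it to clear once the RC is scaled down to fit. A sketch of that condition logic (illustrative only, not controller-manager code; the condition fields mirror the ones the test inspects):

```python
def reconcile_rc(desired, quota_pods):
    """Model one reconcile pass of an RC under a pod quota: create as
    many pods as the quota allows, and surface a ReplicaFailure
    condition if creation fell short of the desired replica count."""
    created = min(desired, quota_pods)
    conditions = []
    if created < desired:
        conditions.append({"type": "ReplicaFailure", "status": "True",
                           "reason": "FailedCreate"})
    return {"replicas": created, "conditions": conditions}
```

Scaling `desired` down to `quota_pods` (as the test does with `kubectl scale`) makes the shortfall disappear, so the next reconcile returns no failure condition.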
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":54,"skipped":745,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:38:57.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-80f7be40-c120-40ec-bda8-441317123bd8 STEP: Creating secret with name s-test-opt-upd-2b351a0f-016b-464b-8d7c-9fe75cdf7665 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-80f7be40-c120-40ec-bda8-441317123bd8 STEP: Updating secret s-test-opt-upd-2b351a0f-016b-464b-8d7c-9fe75cdf7665 STEP: Creating secret with name s-test-opt-create-906266d6-84db-4d81-a5aa-6369752438ef STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:39:11.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3624" for this suite. 
• [SLOW TEST:13.401 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":754,"failed":0} [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:39:11.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 12:39:11.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7126' May 12 12:39:11.200: INFO: stderr: "" May 12 12:39:11.200: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: 
verifying the pod e2e-test-httpd-pod was created May 12 12:39:16.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7126 -o json' May 12 12:39:16.465: INFO: stderr: "" May 12 12:39:16.465: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-12T12:39:11Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-12T12:39:11Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.40\\\"}\": {\n 
\".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-12T12:39:15Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7126\",\n \"resourceVersion\": \"3721694\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7126/pods/e2e-test-httpd-pod\",\n \"uid\": \"b09deeaa-b63b-46e7-8330-865016ee4177\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5292m\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5292m\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5292m\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T12:39:11Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T12:39:15Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T12:39:15Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T12:39:11Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://6b89bedd70084318f94e38cc35a9bd93ca8f0885b59e6242f7a4596a49d126c8\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-12T12:39:14Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.18\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.40\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.40\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-12T12:39:11Z\"\n }\n}\n" STEP: replace the image in the pod May 12 12:39:16.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7126' May 12 12:39:16.882: INFO: stderr: "" May 12 12:39:16.882: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 12 12:39:17.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7126' May 12 12:39:26.254: INFO: stderr: "" May 12 12:39:26.254: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 
12:39:26.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7126" for this suite. • [SLOW TEST:15.564 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":56,"skipped":754,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:39:26.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:39:27.007: INFO: Creating deployment "test-recreate-deployment" May 12 12:39:27.011: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 12 12:39:27.264: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 12 12:39:29.270: INFO: Waiting deployment 
"test-recreate-deployment" to complete May 12 12:39:29.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883967, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883967, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:39:31.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883967, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724883967, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 
12:39:33.277: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 12 12:39:33.284: INFO: Updating deployment test-recreate-deployment May 12 12:39:33.284: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 12 12:39:34.368: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3008 /apis/apps/v1/namespaces/deployment-3008/deployments/test-recreate-deployment 14fddd48-cc24-4336-99c0-5d48879c4a5e 3721831 2 2020-05-12 12:39:27 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-12 12:39:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 
58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-12 12:39:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 
123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005324418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-12 12:39:33 +0000 UTC,LastTransitionTime:2020-05-12 12:39:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-12 12:39:34 +0000 UTC,LastTransitionTime:2020-05-12 12:39:27 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 12 12:39:34.407: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-3008 /apis/apps/v1/namespaces/deployment-3008/replicasets/test-recreate-deployment-d5667d9c7 584ed82a-bb06-4d16-8632-78735dcadb31 3721829 1 2020-05-12 12:39:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 14fddd48-cc24-4336-99c0-5d48879c4a5e 0xc005324930 0xc005324931}] [] [{kube-controller-manager Update apps/v1 2020-05-12 12:39:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 
120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 52 102 100 100 100 52 56 45 99 99 50 52 45 52 51 51 54 45 57 57 99 48 45 53 100 52 56 56 55 57 99 52 97 53 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 
125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053249a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 12:39:34.407: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 12 12:39:34.407: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c deployment-3008 /apis/apps/v1/namespaces/deployment-3008/replicasets/test-recreate-deployment-74d98b5f7c a3ebd4d2-ae28-4b97-849c-afe4bbee6591 3721818 2 2020-05-12 12:39:27 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 14fddd48-cc24-4336-99c0-5d48879c4a5e 0xc005324837 0xc005324838}] [] [{kube-controller-manager Update apps/v1 2020-05-12 12:39:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 
34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 52 102 100 100 100 52 56 45 99 99 50 52 45 52 51 51 54 45 57 57 99 48 45 53 100 52 56 56 55 57 99 52 97 53 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 
114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053248c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 12:39:34.457: INFO: Pod "test-recreate-deployment-d5667d9c7-48kmc" is not 
available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-48kmc test-recreate-deployment-d5667d9c7- deployment-3008 /api/v1/namespaces/deployment-3008/pods/test-recreate-deployment-d5667d9c7-48kmc da82ad97-1604-42da-abbf-0f1fd80489ff 3721832 0 2020-05-12 12:39:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 584ed82a-bb06-4d16-8632-78735dcadb31 0xc005324e70 0xc005324e71}] [] [{kube-controller-manager Update v1 2020-05-12 12:39:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 56 52 101 100 56 50 97 45 98 98 48 54 45 52 100 49 54 45 56 54 51 50 45 55 56 55 51 53 100 99 97 100 98 51 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 
99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:39:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 
101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5m7wz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5m7wz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5m7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},S
tdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:39:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:39:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:39:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:39:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-12 12:39:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:39:34.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3008" for this suite. 
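The RecreateDeployment test that just finished exercises a Deployment whose update strategy deletes all old pods before any new ones are created. A minimal sketch of such a manifest (the name is illustrative, not one generated by the suite; the image matches the one the suite uses elsewhere in this log):

```yaml
# Sketch of a Deployment using the Recreate strategy exercised by the test above.
# Name is illustrative; the e2e suite generates its own object names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-example
spec:
  replicas: 1
  strategy:
    type: Recreate   # scale old ReplicaSet to zero before creating new pods
  selector:
    matchLabels:
      app: recreate-example
  template:
    metadata:
      labels:
        app: recreate-example
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```

With `type: Recreate` (as opposed to the default `RollingUpdate`), updating the pod template causes the controller to terminate every old pod first, which is the "should delete old pods and create new ones" behavior the test asserts.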
• [SLOW TEST:7.878 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":57,"skipped":774,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:39:34.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 12 12:39:34.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7" in namespace "downward-api-1808" to be "Succeeded or Failed" May 12 12:39:34.903: INFO: Pod "downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7": Phase="Pending", Reason="", readiness=false. Elapsed: 45.031156ms May 12 12:39:37.020: INFO: Pod "downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.161797586s May 12 12:39:39.092: INFO: Pod "downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233673808s May 12 12:39:41.125: INFO: Pod "downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7": Phase="Running", Reason="", readiness=true. Elapsed: 6.267092602s May 12 12:39:43.129: INFO: Pod "downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.271091595s STEP: Saw pod success May 12 12:39:43.129: INFO: Pod "downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7" satisfied condition "Succeeded or Failed" May 12 12:39:43.132: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7 container client-container: STEP: delete the pod May 12 12:39:43.172: INFO: Waiting for pod downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7 to disappear May 12 12:39:43.217: INFO: Pod downwardapi-volume-9cd5742c-b8a1-45c0-a2b4-e38e664683d7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:39:43.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1808" for this suite. 
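The downward API volume plugin exercised above projects a container's own resource fields into files. A pod of roughly the following shape reproduces what this test checks; names, image, and the mount path are illustrative, not the ones the suite generates:

```yaml
# Sketch: expose the container's own cpu limit as a file via a downwardAPI volume.
# All names here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```

The test then waits for the pod to reach "Succeeded or Failed" and reads the container log to verify the projected value, matching the polling loop visible in the log above.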
• [SLOW TEST:8.760 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":788,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:39:43.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:39:43.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1745" for this suite. STEP: Destroying namespace "nspatchtest-3cf25fb9-943f-4562-a81f-f9d22185e9a1-8270" for this suite. 
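The "patching the Namespace" step above applies a merge patch that adds a label and then reads the namespace back to confirm the label is present. The patch body is roughly the following; the label key and value here are illustrative, not the ones the suite uses:

```yaml
# Illustrative merge-patch body for a Namespace: adds one label.
# Could be applied with, e.g.:
#   kubectl patch namespace <name> --type=merge -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
metadata:
  labels:
    testLabel: testValue
```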
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":59,"skipped":803,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:39:43.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition May 12 12:39:43.920: INFO: Waiting up to 5m0s for pod "var-expansion-ec3f1683-6c3e-4b9d-a554-40802f10f4a1" in namespace "var-expansion-8148" to be "Succeeded or Failed" May 12 12:39:43.943: INFO: Pod "var-expansion-ec3f1683-6c3e-4b9d-a554-40802f10f4a1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.952593ms May 12 12:39:46.038: INFO: Pod "var-expansion-ec3f1683-6c3e-4b9d-a554-40802f10f4a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117655807s May 12 12:39:48.134: INFO: Pod "var-expansion-ec3f1683-6c3e-4b9d-a554-40802f10f4a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.213827418s STEP: Saw pod success May 12 12:39:48.134: INFO: Pod "var-expansion-ec3f1683-6c3e-4b9d-a554-40802f10f4a1" satisfied condition "Succeeded or Failed" May 12 12:39:48.137: INFO: Trying to get logs from node kali-worker2 pod var-expansion-ec3f1683-6c3e-4b9d-a554-40802f10f4a1 container dapi-container: STEP: delete the pod May 12 12:39:48.195: INFO: Waiting for pod var-expansion-ec3f1683-6c3e-4b9d-a554-40802f10f4a1 to disappear May 12 12:39:48.202: INFO: Pod var-expansion-ec3f1683-6c3e-4b9d-a554-40802f10f4a1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:39:48.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8148" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":813,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:39:48.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:40:05.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5582" for this suite. • [SLOW TEST:17.247 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":61,"skipped":814,"failed":0} [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:40:05.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 12 12:40:05.622: INFO: Waiting up to 5m0s for pod "downward-api-86bb31d0-5011-4496-be98-e8c7e94b7118" in namespace "downward-api-2848" to be "Succeeded or Failed" May 12 12:40:05.684: INFO: Pod "downward-api-86bb31d0-5011-4496-be98-e8c7e94b7118": Phase="Pending", Reason="", readiness=false. Elapsed: 62.654803ms May 12 12:40:07.913: INFO: Pod "downward-api-86bb31d0-5011-4496-be98-e8c7e94b7118": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291489278s May 12 12:40:09.918: INFO: Pod "downward-api-86bb31d0-5011-4496-be98-e8c7e94b7118": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296113349s May 12 12:40:11.921: INFO: Pod "downward-api-86bb31d0-5011-4496-be98-e8c7e94b7118": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.299793081s STEP: Saw pod success May 12 12:40:11.922: INFO: Pod "downward-api-86bb31d0-5011-4496-be98-e8c7e94b7118" satisfied condition "Succeeded or Failed" May 12 12:40:11.925: INFO: Trying to get logs from node kali-worker pod downward-api-86bb31d0-5011-4496-be98-e8c7e94b7118 container dapi-container: STEP: delete the pod May 12 12:40:11.945: INFO: Waiting for pod downward-api-86bb31d0-5011-4496-be98-e8c7e94b7118 to disappear May 12 12:40:11.962: INFO: Pod downward-api-86bb31d0-5011-4496-be98-e8c7e94b7118 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:40:11.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2848" for this suite. • [SLOW TEST:6.513 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":814,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:40:11.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:40:18.146: INFO: Waiting up to 5m0s for pod "client-envvars-e6dbfec3-1908-486e-9748-2a5ee717e6ea" in namespace "pods-253" to be "Succeeded or Failed" May 12 12:40:18.218: INFO: Pod "client-envvars-e6dbfec3-1908-486e-9748-2a5ee717e6ea": Phase="Pending", Reason="", readiness=false. Elapsed: 71.638543ms May 12 12:40:20.222: INFO: Pod "client-envvars-e6dbfec3-1908-486e-9748-2a5ee717e6ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075109444s May 12 12:40:22.226: INFO: Pod "client-envvars-e6dbfec3-1908-486e-9748-2a5ee717e6ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079272027s STEP: Saw pod success May 12 12:40:22.226: INFO: Pod "client-envvars-e6dbfec3-1908-486e-9748-2a5ee717e6ea" satisfied condition "Succeeded or Failed" May 12 12:40:22.228: INFO: Trying to get logs from node kali-worker pod client-envvars-e6dbfec3-1908-486e-9748-2a5ee717e6ea container env3cont: STEP: delete the pod May 12 12:40:22.693: INFO: Waiting for pod client-envvars-e6dbfec3-1908-486e-9748-2a5ee717e6ea to disappear May 12 12:40:22.756: INFO: Pod client-envvars-e6dbfec3-1908-486e-9748-2a5ee717e6ea no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:40:22.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-253" for this suite. 
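The "environment variables for services" test above relies on the kubelet's service-link injection: for every Service that exists when a pod starts, variables derived from the service name (upper-cased, dashes replaced by underscores) are added to the pod's containers. A sketch of the contract, with an illustrative service rather than the one the suite creates:

```yaml
# Illustrative Service; a pod started afterwards in the same namespace
# (with enableServiceLinks left at its default of true) sees env vars
# derived from this service's name and ports.
apiVersion: v1
kind: Service
metadata:
  name: fooservice
spec:
  selector:
    app: server
  ports:
  - port: 8765
    targetPort: 8080
# Injected into later pods, among others:
#   FOOSERVICE_SERVICE_HOST=<cluster IP>
#   FOOSERVICE_SERVICE_PORT=8765
```

This is why the test first starts a server pod and service, then launches the client pod (`client-envvars-…` above) and asserts on that client's environment.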
• [SLOW TEST:10.796 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:40:22.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:40:23.678: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:40:25.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884023, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884023, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884023, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884023, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:40:27.693: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884023, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884023, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884023, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884023, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:40:30.735: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:40:30.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9144-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:40:31.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-16" for this suite. STEP: Destroying namespace "webhook-16-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.258 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":64,"skipped":878,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:40:32.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:40:45.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5041" for this suite. • [SLOW TEST:13.829 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":65,"skipped":879,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:40:45.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components May 12 12:40:45.921: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 12 12:40:45.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5790' May 12 12:40:46.273: INFO: stderr: "" May 12 12:40:46.273: INFO: stdout: "service/agnhost-slave created\n" May 12 12:40:46.273: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 12 
12:40:46.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5790' May 12 12:40:46.573: INFO: stderr: "" May 12 12:40:46.573: INFO: stdout: "service/agnhost-master created\n" May 12 12:40:46.574: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 12 12:40:46.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5790' May 12 12:40:46.883: INFO: stderr: "" May 12 12:40:46.883: INFO: stdout: "service/frontend created\n" May 12 12:40:46.883: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 12 12:40:46.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5790' May 12 12:40:47.177: INFO: stderr: "" May 12 12:40:47.178: INFO: stdout: "deployment.apps/frontend created\n" May 12 12:40:47.178: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: 
        requests:
          cpu: 100m
          memory: 100Mi
        ports:
        - containerPort: 6379
May 12 12:40:47.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5790'
May 12 12:40:47.476: INFO: stderr: ""
May 12 12:40:47.476: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 12 12:40:47.476: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 12 12:40:47.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5790'
May 12 12:40:47.726: INFO: stderr: ""
May 12 12:40:47.726: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 12 12:40:47.726: INFO: Waiting for all frontend pods to be Running.
May 12 12:40:57.776: INFO: Waiting for frontend to serve content.
May 12 12:40:57.784: INFO: Trying to add a new entry to the guestbook.
May 12 12:40:57.792: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 12 12:40:57.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5790'
May 12 12:40:57.928: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 12:40:57.928: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 12 12:40:57.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5790'
May 12 12:40:58.343: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 12:40:58.343: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 12 12:40:58.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5790'
May 12 12:40:58.735: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 12:40:58.735: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 12 12:40:58.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5790'
May 12 12:40:58.878: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 12:40:58.878: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 12 12:40:58.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5790'
May 12 12:40:59.140: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 12:40:59.140: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 12 12:40:59.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5790'
May 12 12:40:59.573: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 12:40:59.573: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:40:59.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5790" for this suite.
• [SLOW TEST:14.063 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":66,"skipped":901,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:40:59.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:41:21.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-771" for this suite.
STEP: Destroying namespace "nsdeletetest-27" for this suite.
May 12 12:41:22.164: INFO: Namespace nsdeletetest-27 was already deleted
STEP: Destroying namespace "nsdeletetest-5802" for this suite.
• [SLOW TEST:22.253 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":67,"skipped":927,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:41:22.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet
functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-914 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet May 12 12:41:23.381: INFO: Found 0 stateful pods, waiting for 3 May 12 12:41:33.452: INFO: Found 2 stateful pods, waiting for 3 May 12 12:41:43.385: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 12:41:43.385: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 12:41:43.385: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 12 12:41:43.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-914 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 12:41:43.655: INFO: stderr: "I0512 12:41:43.529095 1069 log.go:172] (0xc0000eabb0) (0xc000a42000) Create stream\nI0512 12:41:43.529374 1069 log.go:172] (0xc0000eabb0) (0xc000a42000) Stream added, broadcasting: 1\nI0512 12:41:43.530532 1069 log.go:172] (0xc0000eabb0) Reply frame received for 1\nI0512 12:41:43.530568 1069 log.go:172] (0xc0000eabb0) (0xc000843540) Create stream\nI0512 12:41:43.530584 1069 log.go:172] (0xc0000eabb0) (0xc000843540) Stream added, broadcasting: 3\nI0512 12:41:43.531362 1069 log.go:172] (0xc0000eabb0) Reply frame received for 3\nI0512 12:41:43.531387 1069 log.go:172] (0xc0000eabb0) (0xc0008435e0) Create stream\nI0512 12:41:43.531400 1069 log.go:172] (0xc0000eabb0) (0xc0008435e0) Stream added, broadcasting: 5\nI0512 12:41:43.531977 1069 log.go:172] (0xc0000eabb0) Reply frame received for 5\nI0512 12:41:43.589431 1069 log.go:172] (0xc0000eabb0) 
Data frame received for 5\nI0512 12:41:43.589462 1069 log.go:172] (0xc0008435e0) (5) Data frame handling\nI0512 12:41:43.589483 1069 log.go:172] (0xc0008435e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 12:41:43.648538 1069 log.go:172] (0xc0000eabb0) Data frame received for 3\nI0512 12:41:43.648567 1069 log.go:172] (0xc000843540) (3) Data frame handling\nI0512 12:41:43.648593 1069 log.go:172] (0xc000843540) (3) Data frame sent\nI0512 12:41:43.648744 1069 log.go:172] (0xc0000eabb0) Data frame received for 5\nI0512 12:41:43.648775 1069 log.go:172] (0xc0008435e0) (5) Data frame handling\nI0512 12:41:43.648804 1069 log.go:172] (0xc0000eabb0) Data frame received for 3\nI0512 12:41:43.648856 1069 log.go:172] (0xc000843540) (3) Data frame handling\nI0512 12:41:43.650433 1069 log.go:172] (0xc0000eabb0) Data frame received for 1\nI0512 12:41:43.650469 1069 log.go:172] (0xc000a42000) (1) Data frame handling\nI0512 12:41:43.650487 1069 log.go:172] (0xc000a42000) (1) Data frame sent\nI0512 12:41:43.650503 1069 log.go:172] (0xc0000eabb0) (0xc000a42000) Stream removed, broadcasting: 1\nI0512 12:41:43.650529 1069 log.go:172] (0xc0000eabb0) Go away received\nI0512 12:41:43.650834 1069 log.go:172] (0xc0000eabb0) (0xc000a42000) Stream removed, broadcasting: 1\nI0512 12:41:43.650860 1069 log.go:172] (0xc0000eabb0) (0xc000843540) Stream removed, broadcasting: 3\nI0512 12:41:43.650881 1069 log.go:172] (0xc0000eabb0) (0xc0008435e0) Stream removed, broadcasting: 5\n" May 12 12:41:43.655: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 12:41:43.655: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 12 12:41:53.726: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating 
Pods in reverse ordinal order May 12 12:42:04.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-914 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 12:42:04.478: INFO: stderr: "I0512 12:42:04.264617 1091 log.go:172] (0xc000970b00) (0xc0009ba0a0) Create stream\nI0512 12:42:04.264691 1091 log.go:172] (0xc000970b00) (0xc0009ba0a0) Stream added, broadcasting: 1\nI0512 12:42:04.266993 1091 log.go:172] (0xc000970b00) Reply frame received for 1\nI0512 12:42:04.267048 1091 log.go:172] (0xc000970b00) (0xc0009ae000) Create stream\nI0512 12:42:04.267072 1091 log.go:172] (0xc000970b00) (0xc0009ae000) Stream added, broadcasting: 3\nI0512 12:42:04.267931 1091 log.go:172] (0xc000970b00) Reply frame received for 3\nI0512 12:42:04.267966 1091 log.go:172] (0xc000970b00) (0xc0009ba140) Create stream\nI0512 12:42:04.267980 1091 log.go:172] (0xc000970b00) (0xc0009ba140) Stream added, broadcasting: 5\nI0512 12:42:04.269017 1091 log.go:172] (0xc000970b00) Reply frame received for 5\nI0512 12:42:04.329026 1091 log.go:172] (0xc000970b00) Data frame received for 5\nI0512 12:42:04.329052 1091 log.go:172] (0xc0009ba140) (5) Data frame handling\nI0512 12:42:04.329066 1091 log.go:172] (0xc0009ba140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 12:42:04.468668 1091 log.go:172] (0xc000970b00) Data frame received for 3\nI0512 12:42:04.468706 1091 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0512 12:42:04.468722 1091 log.go:172] (0xc0009ae000) (3) Data frame sent\nI0512 12:42:04.468804 1091 log.go:172] (0xc000970b00) Data frame received for 3\nI0512 12:42:04.468816 1091 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0512 12:42:04.469101 1091 log.go:172] (0xc000970b00) Data frame received for 5\nI0512 12:42:04.469395 1091 log.go:172] (0xc0009ba140) (5) Data frame handling\nI0512 12:42:04.472952 1091 log.go:172] 
(0xc000970b00) Data frame received for 1\nI0512 12:42:04.472973 1091 log.go:172] (0xc0009ba0a0) (1) Data frame handling\nI0512 12:42:04.472982 1091 log.go:172] (0xc0009ba0a0) (1) Data frame sent\nI0512 12:42:04.472995 1091 log.go:172] (0xc000970b00) (0xc0009ba0a0) Stream removed, broadcasting: 1\nI0512 12:42:04.473475 1091 log.go:172] (0xc000970b00) (0xc0009ba0a0) Stream removed, broadcasting: 1\nI0512 12:42:04.473504 1091 log.go:172] (0xc000970b00) (0xc0009ae000) Stream removed, broadcasting: 3\nI0512 12:42:04.473522 1091 log.go:172] (0xc000970b00) (0xc0009ba140) Stream removed, broadcasting: 5\n" May 12 12:42:04.479: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 12:42:04.479: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 12:42:34.560: INFO: Waiting for StatefulSet statefulset-914/ss2 to complete update STEP: Rolling back to a previous revision May 12 12:42:44.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-914 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 12:42:44.808: INFO: stderr: "I0512 12:42:44.686249 1111 log.go:172] (0xc000bb4160) (0xc00056c1e0) Create stream\nI0512 12:42:44.686299 1111 log.go:172] (0xc000bb4160) (0xc00056c1e0) Stream added, broadcasting: 1\nI0512 12:42:44.687735 1111 log.go:172] (0xc000bb4160) Reply frame received for 1\nI0512 12:42:44.687770 1111 log.go:172] (0xc000bb4160) (0xc000a70000) Create stream\nI0512 12:42:44.687786 1111 log.go:172] (0xc000bb4160) (0xc000a70000) Stream added, broadcasting: 3\nI0512 12:42:44.688652 1111 log.go:172] (0xc000bb4160) Reply frame received for 3\nI0512 12:42:44.688680 1111 log.go:172] (0xc000bb4160) (0xc000a700a0) Create stream\nI0512 12:42:44.688693 1111 log.go:172] (0xc000bb4160) (0xc000a700a0) Stream added, broadcasting: 5\nI0512 
12:42:44.689658 1111 log.go:172] (0xc000bb4160) Reply frame received for 5\nI0512 12:42:44.759389 1111 log.go:172] (0xc000bb4160) Data frame received for 5\nI0512 12:42:44.759404 1111 log.go:172] (0xc000a700a0) (5) Data frame handling\nI0512 12:42:44.759409 1111 log.go:172] (0xc000a700a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 12:42:44.801864 1111 log.go:172] (0xc000bb4160) Data frame received for 5\nI0512 12:42:44.801913 1111 log.go:172] (0xc000a700a0) (5) Data frame handling\nI0512 12:42:44.801933 1111 log.go:172] (0xc000bb4160) Data frame received for 3\nI0512 12:42:44.801943 1111 log.go:172] (0xc000a70000) (3) Data frame handling\nI0512 12:42:44.801954 1111 log.go:172] (0xc000a70000) (3) Data frame sent\nI0512 12:42:44.801964 1111 log.go:172] (0xc000bb4160) Data frame received for 3\nI0512 12:42:44.801978 1111 log.go:172] (0xc000a70000) (3) Data frame handling\nI0512 12:42:44.803520 1111 log.go:172] (0xc000bb4160) Data frame received for 1\nI0512 12:42:44.803532 1111 log.go:172] (0xc00056c1e0) (1) Data frame handling\nI0512 12:42:44.803542 1111 log.go:172] (0xc00056c1e0) (1) Data frame sent\nI0512 12:42:44.803551 1111 log.go:172] (0xc000bb4160) (0xc00056c1e0) Stream removed, broadcasting: 1\nI0512 12:42:44.803557 1111 log.go:172] (0xc000bb4160) Go away received\nI0512 12:42:44.803889 1111 log.go:172] (0xc000bb4160) (0xc00056c1e0) Stream removed, broadcasting: 1\nI0512 12:42:44.803905 1111 log.go:172] (0xc000bb4160) (0xc000a70000) Stream removed, broadcasting: 3\nI0512 12:42:44.803914 1111 log.go:172] (0xc000bb4160) (0xc000a700a0) Stream removed, broadcasting: 5\n" May 12 12:42:44.808: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 12:42:44.808: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 12:42:54.844: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal 
order May 12 12:43:04.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-914 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 12:43:05.079: INFO: stderr: "I0512 12:43:05.010111 1132 log.go:172] (0xc00053db80) (0xc0006bd540) Create stream\nI0512 12:43:05.010169 1132 log.go:172] (0xc00053db80) (0xc0006bd540) Stream added, broadcasting: 1\nI0512 12:43:05.012526 1132 log.go:172] (0xc00053db80) Reply frame received for 1\nI0512 12:43:05.012559 1132 log.go:172] (0xc00053db80) (0xc000a12000) Create stream\nI0512 12:43:05.012580 1132 log.go:172] (0xc00053db80) (0xc000a12000) Stream added, broadcasting: 3\nI0512 12:43:05.013605 1132 log.go:172] (0xc00053db80) Reply frame received for 3\nI0512 12:43:05.013647 1132 log.go:172] (0xc00053db80) (0xc0006bd5e0) Create stream\nI0512 12:43:05.013667 1132 log.go:172] (0xc00053db80) (0xc0006bd5e0) Stream added, broadcasting: 5\nI0512 12:43:05.014511 1132 log.go:172] (0xc00053db80) Reply frame received for 5\nI0512 12:43:05.073012 1132 log.go:172] (0xc00053db80) Data frame received for 5\nI0512 12:43:05.073058 1132 log.go:172] (0xc0006bd5e0) (5) Data frame handling\nI0512 12:43:05.073079 1132 log.go:172] (0xc0006bd5e0) (5) Data frame sent\nI0512 12:43:05.073099 1132 log.go:172] (0xc00053db80) Data frame received for 5\nI0512 12:43:05.073298 1132 log.go:172] (0xc0006bd5e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 12:43:05.073327 1132 log.go:172] (0xc00053db80) Data frame received for 3\nI0512 12:43:05.073342 1132 log.go:172] (0xc000a12000) (3) Data frame handling\nI0512 12:43:05.073352 1132 log.go:172] (0xc000a12000) (3) Data frame sent\nI0512 12:43:05.073361 1132 log.go:172] (0xc00053db80) Data frame received for 3\nI0512 12:43:05.073369 1132 log.go:172] (0xc000a12000) (3) Data frame handling\nI0512 12:43:05.074606 1132 log.go:172] (0xc00053db80) Data frame received 
for 1\nI0512 12:43:05.074633 1132 log.go:172] (0xc0006bd540) (1) Data frame handling\nI0512 12:43:05.074659 1132 log.go:172] (0xc0006bd540) (1) Data frame sent\nI0512 12:43:05.074678 1132 log.go:172] (0xc00053db80) (0xc0006bd540) Stream removed, broadcasting: 1\nI0512 12:43:05.074697 1132 log.go:172] (0xc00053db80) Go away received\nI0512 12:43:05.074991 1132 log.go:172] (0xc00053db80) (0xc0006bd540) Stream removed, broadcasting: 1\nI0512 12:43:05.075005 1132 log.go:172] (0xc00053db80) (0xc000a12000) Stream removed, broadcasting: 3\nI0512 12:43:05.075011 1132 log.go:172] (0xc00053db80) (0xc0006bd5e0) Stream removed, broadcasting: 5\n" May 12 12:43:05.079: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 12:43:05.079: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 12:43:35.100: INFO: Waiting for StatefulSet statefulset-914/ss2 to complete update May 12 12:43:35.101: INFO: Waiting for Pod statefulset-914/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 12:43:45.109: INFO: Waiting for StatefulSet statefulset-914/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 12 12:43:55.109: INFO: Deleting all statefulset in ns statefulset-914 May 12 12:43:55.111: INFO: Scaling statefulset ss2 to 0 May 12 12:44:25.198: INFO: Waiting for statefulset status.replicas updated to 0 May 12 12:44:25.201: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:44:25.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-914" for this suite. 
• [SLOW TEST:183.170 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":68,"skipped":950,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:44:25.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 12 12:44:25.693: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8282 /api/v1/namespaces/watch-8282/configmaps/e2e-watch-test-watch-closed 045f2fdc-6770-48be-b93c-eda10289dccd 3723529 0
2020-05-12 12:44:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-12 12:44:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 12 12:44:25.693: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8282 /api/v1/namespaces/watch-8282/configmaps/e2e-watch-test-watch-closed 045f2fdc-6770-48be-b93c-eda10289dccd 3723532 0 2020-05-12 12:44:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-12 12:44:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 12 12:44:25.733: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8282 /api/v1/namespaces/watch-8282/configmaps/e2e-watch-test-watch-closed 045f2fdc-6770-48be-b93c-eda10289dccd 3723533 0 2020-05-12 12:44:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-12 12:44:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 
34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 12 12:44:25.734: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8282 /api/v1/namespaces/watch-8282/configmaps/e2e-watch-test-watch-closed 045f2fdc-6770-48be-b93c-eda10289dccd 3723534 0 2020-05-12 12:44:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-12 12:44:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:44:25.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8282" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":69,"skipped":958,"failed":0}
SS
------------------------------
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:44:25.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-0fdd3e46-36da-4c48-87ea-07b75098ada4 in namespace container-probe-7171
May 12 12:44:32.909: INFO: Started pod busybox-0fdd3e46-36da-4c48-87ea-07b75098ada4 in namespace container-probe-7171
STEP: checking the pod's current state and verifying that restartCount is present
May 12 12:44:32.912: INFO: Initial restart count of pod busybox-0fdd3e46-36da-4c48-87ea-07b75098ada4 is 0
May 12 12:45:27.188: INFO: Restart count of pod container-probe-7171/busybox-0fdd3e46-36da-4c48-87ea-07b75098ada4 is now 1 (54.275757633s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:45:27.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7171" for this suite.
• [SLOW TEST:61.533 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":960,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:45:27.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 12:45:29.135: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 12:45:31.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1,
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884329, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884329, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:45:34.245: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:45:34.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6992-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:45:35.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-269" for this suite. STEP: Destroying namespace "webhook-269-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.324 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":71,"skipped":986,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:45:35.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-4758
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 12 12:45:35.814: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 12 12:45:35.969: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 12:45:37.973: INFO:
The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 12:45:39.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 12:45:41.973: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:45:43.973: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:45:45.972: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:45:47.972: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:45:49.973: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:45:51.972: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:45:53.973: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:45:55.972: INFO: The status of Pod netserver-0 is Running (Ready = true) May 12 12:45:55.978: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 12 12:46:00.027: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.248:8080/dial?request=hostname&protocol=http&host=10.244.2.247&port=8080&tries=1'] Namespace:pod-network-test-4758 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:46:00.027: INFO: >>> kubeConfig: /root/.kube/config I0512 12:46:00.065975 7 log.go:172] (0xc002b70c60) (0xc001b185a0) Create stream I0512 12:46:00.066019 7 log.go:172] (0xc002b70c60) (0xc001b185a0) Stream added, broadcasting: 1 I0512 12:46:00.067817 7 log.go:172] (0xc002b70c60) Reply frame received for 1 I0512 12:46:00.067854 7 log.go:172] (0xc002b70c60) (0xc001e6ba40) Create stream I0512 12:46:00.067867 7 log.go:172] (0xc002b70c60) (0xc001e6ba40) Stream added, broadcasting: 3 I0512 12:46:00.068925 7 log.go:172] (0xc002b70c60) Reply frame received for 3 I0512 12:46:00.068970 7 log.go:172] (0xc002b70c60) (0xc001eb41e0) Create stream I0512 12:46:00.068991 7 
log.go:172] (0xc002b70c60) (0xc001eb41e0) Stream added, broadcasting: 5 I0512 12:46:00.070092 7 log.go:172] (0xc002b70c60) Reply frame received for 5 I0512 12:46:00.139923 7 log.go:172] (0xc002b70c60) Data frame received for 3 I0512 12:46:00.139962 7 log.go:172] (0xc001e6ba40) (3) Data frame handling I0512 12:46:00.139987 7 log.go:172] (0xc001e6ba40) (3) Data frame sent I0512 12:46:00.140388 7 log.go:172] (0xc002b70c60) Data frame received for 3 I0512 12:46:00.140421 7 log.go:172] (0xc001e6ba40) (3) Data frame handling I0512 12:46:00.140454 7 log.go:172] (0xc002b70c60) Data frame received for 5 I0512 12:46:00.140479 7 log.go:172] (0xc001eb41e0) (5) Data frame handling I0512 12:46:00.142262 7 log.go:172] (0xc002b70c60) Data frame received for 1 I0512 12:46:00.142284 7 log.go:172] (0xc001b185a0) (1) Data frame handling I0512 12:46:00.142300 7 log.go:172] (0xc001b185a0) (1) Data frame sent I0512 12:46:00.142322 7 log.go:172] (0xc002b70c60) (0xc001b185a0) Stream removed, broadcasting: 1 I0512 12:46:00.142336 7 log.go:172] (0xc002b70c60) Go away received I0512 12:46:00.142453 7 log.go:172] (0xc002b70c60) (0xc001b185a0) Stream removed, broadcasting: 1 I0512 12:46:00.142482 7 log.go:172] (0xc002b70c60) (0xc001e6ba40) Stream removed, broadcasting: 3 I0512 12:46:00.142505 7 log.go:172] (0xc002b70c60) (0xc001eb41e0) Stream removed, broadcasting: 5 May 12 12:46:00.142: INFO: Waiting for responses: map[] May 12 12:46:00.146: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.248:8080/dial?request=hostname&protocol=http&host=10.244.1.55&port=8080&tries=1'] Namespace:pod-network-test-4758 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:46:00.146: INFO: >>> kubeConfig: /root/.kube/config I0512 12:46:00.177440 7 log.go:172] (0xc002fd04d0) (0xc001eb4aa0) Create stream I0512 12:46:00.177490 7 log.go:172] (0xc002fd04d0) (0xc001eb4aa0) Stream added, broadcasting: 1 I0512 
12:46:00.186610 7 log.go:172] (0xc002fd04d0) Reply frame received for 1 I0512 12:46:00.186662 7 log.go:172] (0xc002fd04d0) (0xc001b18640) Create stream I0512 12:46:00.186679 7 log.go:172] (0xc002fd04d0) (0xc001b18640) Stream added, broadcasting: 3 I0512 12:46:00.187701 7 log.go:172] (0xc002fd04d0) Reply frame received for 3 I0512 12:46:00.187722 7 log.go:172] (0xc002fd04d0) (0xc001b186e0) Create stream I0512 12:46:00.187731 7 log.go:172] (0xc002fd04d0) (0xc001b186e0) Stream added, broadcasting: 5 I0512 12:46:00.188730 7 log.go:172] (0xc002fd04d0) Reply frame received for 5 I0512 12:46:00.265440 7 log.go:172] (0xc002fd04d0) Data frame received for 3 I0512 12:46:00.265487 7 log.go:172] (0xc001b18640) (3) Data frame handling I0512 12:46:00.265513 7 log.go:172] (0xc001b18640) (3) Data frame sent I0512 12:46:00.265700 7 log.go:172] (0xc002fd04d0) Data frame received for 3 I0512 12:46:00.265713 7 log.go:172] (0xc001b18640) (3) Data frame handling I0512 12:46:00.265952 7 log.go:172] (0xc002fd04d0) Data frame received for 5 I0512 12:46:00.265980 7 log.go:172] (0xc001b186e0) (5) Data frame handling I0512 12:46:00.267383 7 log.go:172] (0xc002fd04d0) Data frame received for 1 I0512 12:46:00.267394 7 log.go:172] (0xc001eb4aa0) (1) Data frame handling I0512 12:46:00.267400 7 log.go:172] (0xc001eb4aa0) (1) Data frame sent I0512 12:46:00.267407 7 log.go:172] (0xc002fd04d0) (0xc001eb4aa0) Stream removed, broadcasting: 1 I0512 12:46:00.267497 7 log.go:172] (0xc002fd04d0) (0xc001eb4aa0) Stream removed, broadcasting: 1 I0512 12:46:00.267514 7 log.go:172] (0xc002fd04d0) (0xc001b18640) Stream removed, broadcasting: 3 I0512 12:46:00.267643 7 log.go:172] (0xc002fd04d0) (0xc001b186e0) Stream removed, broadcasting: 5 May 12 12:46:00.267: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:46:00.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pod-network-test-4758" for this suite. • [SLOW TEST:24.557 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1010,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:46:00.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2266 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2266 STEP: creating replication controller externalsvc in namespace services-2266 
I0512 12:46:00.537596 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2266, replica count: 2 I0512 12:46:03.588129 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 12:46:06.588368 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 12 12:46:07.151: INFO: Creating new exec pod May 12 12:46:11.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpodpnzkk -- /bin/sh -x -c nslookup nodeport-service' May 12 12:46:15.390: INFO: stderr: "I0512 12:46:15.315530 1153 log.go:172] (0xc00003a6e0) (0xc000011360) Create stream\nI0512 12:46:15.315558 1153 log.go:172] (0xc00003a6e0) (0xc000011360) Stream added, broadcasting: 1\nI0512 12:46:15.317811 1153 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0512 12:46:15.317852 1153 log.go:172] (0xc00003a6e0) (0xc0004c66e0) Create stream\nI0512 12:46:15.317863 1153 log.go:172] (0xc00003a6e0) (0xc0004c66e0) Stream added, broadcasting: 3\nI0512 12:46:15.318819 1153 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0512 12:46:15.318872 1153 log.go:172] (0xc00003a6e0) (0xc0005d6280) Create stream\nI0512 12:46:15.318889 1153 log.go:172] (0xc00003a6e0) (0xc0005d6280) Stream added, broadcasting: 5\nI0512 12:46:15.319828 1153 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0512 12:46:15.377537 1153 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 12:46:15.377564 1153 log.go:172] (0xc0005d6280) (5) Data frame handling\nI0512 12:46:15.377583 1153 log.go:172] (0xc0005d6280) (5) Data frame sent\n+ nslookup nodeport-service\nI0512 12:46:15.383740 1153 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 12:46:15.383752 
1153 log.go:172] (0xc0004c66e0) (3) Data frame handling\nI0512 12:46:15.383765 1153 log.go:172] (0xc0004c66e0) (3) Data frame sent\nI0512 12:46:15.384279 1153 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 12:46:15.384302 1153 log.go:172] (0xc0004c66e0) (3) Data frame handling\nI0512 12:46:15.384318 1153 log.go:172] (0xc0004c66e0) (3) Data frame sent\nI0512 12:46:15.384513 1153 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 12:46:15.384528 1153 log.go:172] (0xc0004c66e0) (3) Data frame handling\nI0512 12:46:15.384540 1153 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 12:46:15.384545 1153 log.go:172] (0xc0005d6280) (5) Data frame handling\nI0512 12:46:15.385816 1153 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0512 12:46:15.385829 1153 log.go:172] (0xc000011360) (1) Data frame handling\nI0512 12:46:15.385838 1153 log.go:172] (0xc000011360) (1) Data frame sent\nI0512 12:46:15.385854 1153 log.go:172] (0xc00003a6e0) (0xc000011360) Stream removed, broadcasting: 1\nI0512 12:46:15.386016 1153 log.go:172] (0xc00003a6e0) Go away received\nI0512 12:46:15.386077 1153 log.go:172] (0xc00003a6e0) (0xc000011360) Stream removed, broadcasting: 1\nI0512 12:46:15.386090 1153 log.go:172] (0xc00003a6e0) (0xc0004c66e0) Stream removed, broadcasting: 3\nI0512 12:46:15.386101 1153 log.go:172] (0xc00003a6e0) (0xc0005d6280) Stream removed, broadcasting: 5\n" May 12 12:46:15.390: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2266.svc.cluster.local\tcanonical name = externalsvc.services-2266.svc.cluster.local.\nName:\texternalsvc.services-2266.svc.cluster.local\nAddress: 10.109.47.145\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2266, will wait for the garbage collector to delete the pods May 12 12:46:15.447: INFO: Deleting ReplicationController externalsvc took: 4.984899ms May 12 12:46:15.748: INFO: Terminating ReplicationController externalsvc pods took: 300.265531ms 
May 12 12:46:24.047: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:46:24.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2266" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:23.993 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":73,"skipped":1020,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:46:24.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-cf8a0fda-e9c9-4b65-bce6-9b4ba1ef929e STEP: Creating a pod to test consume configMaps May 12 12:46:24.357: INFO: Waiting up to 5m0s 
for pod "pod-projected-configmaps-9e334a99-84b5-4c79-8815-9546ad949b72" in namespace "projected-3199" to be "Succeeded or Failed" May 12 12:46:24.379: INFO: Pod "pod-projected-configmaps-9e334a99-84b5-4c79-8815-9546ad949b72": Phase="Pending", Reason="", readiness=false. Elapsed: 21.829827ms May 12 12:46:26.383: INFO: Pod "pod-projected-configmaps-9e334a99-84b5-4c79-8815-9546ad949b72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025904116s May 12 12:46:28.387: INFO: Pod "pod-projected-configmaps-9e334a99-84b5-4c79-8815-9546ad949b72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029983196s May 12 12:46:30.391: INFO: Pod "pod-projected-configmaps-9e334a99-84b5-4c79-8815-9546ad949b72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034432326s STEP: Saw pod success May 12 12:46:30.391: INFO: Pod "pod-projected-configmaps-9e334a99-84b5-4c79-8815-9546ad949b72" satisfied condition "Succeeded or Failed" May 12 12:46:30.394: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-9e334a99-84b5-4c79-8815-9546ad949b72 container projected-configmap-volume-test: STEP: delete the pod May 12 12:46:30.665: INFO: Waiting for pod pod-projected-configmaps-9e334a99-84b5-4c79-8815-9546ad949b72 to disappear May 12 12:46:30.703: INFO: Pod pod-projected-configmaps-9e334a99-84b5-4c79-8815-9546ad949b72 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:46:30.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3199" for this suite. 
• [SLOW TEST:6.441 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1025,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:46:30.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-aa1ff31f-c7e4-4a44-a67c-cac74b460ae5 STEP: Creating a pod to test consume configMaps May 12 12:46:31.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-afad9dfd-f324-4364-b1a6-b438429cab65" in namespace "configmap-5920" to be "Succeeded or Failed" May 12 12:46:31.015: INFO: Pod "pod-configmaps-afad9dfd-f324-4364-b1a6-b438429cab65": Phase="Pending", Reason="", readiness=false. Elapsed: 3.380029ms May 12 12:46:33.018: INFO: Pod "pod-configmaps-afad9dfd-f324-4364-b1a6-b438429cab65": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007041987s May 12 12:46:35.022: INFO: Pod "pod-configmaps-afad9dfd-f324-4364-b1a6-b438429cab65": Phase="Running", Reason="", readiness=true. Elapsed: 4.010589683s May 12 12:46:37.025: INFO: Pod "pod-configmaps-afad9dfd-f324-4364-b1a6-b438429cab65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013907993s STEP: Saw pod success May 12 12:46:37.025: INFO: Pod "pod-configmaps-afad9dfd-f324-4364-b1a6-b438429cab65" satisfied condition "Succeeded or Failed" May 12 12:46:37.028: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-afad9dfd-f324-4364-b1a6-b438429cab65 container configmap-volume-test: STEP: delete the pod May 12 12:46:37.071: INFO: Waiting for pod pod-configmaps-afad9dfd-f324-4364-b1a6-b438429cab65 to disappear May 12 12:46:37.130: INFO: Pod pod-configmaps-afad9dfd-f324-4364-b1a6-b438429cab65 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:46:37.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5920" for this suite. 
• [SLOW TEST:6.426 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1028,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:46:37.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:46:38.018: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:46:40.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884398, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884398, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884398, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884397, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:46:43.320: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:46:43.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:46:44.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-6794" for this suite. STEP: Destroying namespace "webhook-6794-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.826 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":76,"skipped":1070,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:46:44.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-c4rs STEP: Creating a pod to test 
atomic-volume-subpath May 12 12:46:45.491: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-c4rs" in namespace "subpath-7981" to be "Succeeded or Failed" May 12 12:46:45.531: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Pending", Reason="", readiness=false. Elapsed: 39.953183ms May 12 12:46:47.535: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043453243s May 12 12:46:49.537: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 4.046303012s May 12 12:46:51.542: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 6.051201004s May 12 12:46:53.546: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 8.054436932s May 12 12:46:55.548: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 10.057415211s May 12 12:46:57.552: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 12.060748205s May 12 12:46:59.555: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 14.063820133s May 12 12:47:01.559: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 16.067568488s May 12 12:47:03.562: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 18.0706366s May 12 12:47:05.565: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 20.073507514s May 12 12:47:07.567: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Running", Reason="", readiness=true. Elapsed: 22.076367044s May 12 12:47:09.572: INFO: Pod "pod-subpath-test-configmap-c4rs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.081389237s STEP: Saw pod success May 12 12:47:09.573: INFO: Pod "pod-subpath-test-configmap-c4rs" satisfied condition "Succeeded or Failed" May 12 12:47:09.576: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-c4rs container test-container-subpath-configmap-c4rs: STEP: delete the pod May 12 12:47:09.642: INFO: Waiting for pod pod-subpath-test-configmap-c4rs to disappear May 12 12:47:09.654: INFO: Pod pod-subpath-test-configmap-c4rs no longer exists STEP: Deleting pod pod-subpath-test-configmap-c4rs May 12 12:47:09.654: INFO: Deleting pod "pod-subpath-test-configmap-c4rs" in namespace "subpath-7981" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:47:09.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7981" for this suite. • [SLOW TEST:24.698 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":77,"skipped":1107,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:47:09.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD May 12 12:47:09.768: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:47:25.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6531" for this suite. 
• [SLOW TEST:15.875 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":78,"skipped":1109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:47:25.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 12 12:47:25.617: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 12 12:47:36.335: INFO: >>> kubeConfig: /root/.kube/config May 12 12:47:39.338: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:47:50.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3676" for this suite. • [SLOW TEST:24.563 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":79,"skipped":1143,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:47:50.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD May 12 12:47:50.320: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version 
name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:48:07.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5971" for this suite. • [SLOW TEST:17.439 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":80,"skipped":1146,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:48:07.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC May 12 12:48:07.663: INFO: namespace kubectl-6963 May 12 12:48:07.663: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6963' May 12 12:48:08.001: INFO: stderr: "" May 12 12:48:08.001: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 12 12:48:09.006: INFO: Selector matched 1 pods for map[app:agnhost] May 12 12:48:09.006: INFO: Found 0 / 1 May 12 12:48:10.006: INFO: Selector matched 1 pods for map[app:agnhost] May 12 12:48:10.006: INFO: Found 0 / 1 May 12 12:48:11.005: INFO: Selector matched 1 pods for map[app:agnhost] May 12 12:48:11.005: INFO: Found 0 / 1 May 12 12:48:12.005: INFO: Selector matched 1 pods for map[app:agnhost] May 12 12:48:12.005: INFO: Found 0 / 1 May 12 12:48:13.335: INFO: Selector matched 1 pods for map[app:agnhost] May 12 12:48:13.335: INFO: Found 1 / 1 May 12 12:48:13.335: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 12:48:13.339: INFO: Selector matched 1 pods for map[app:agnhost] May 12 12:48:13.339: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 12:48:13.339: INFO: wait on agnhost-master startup in kubectl-6963 May 12 12:48:13.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-8bwvb agnhost-master --namespace=kubectl-6963' May 12 12:48:13.780: INFO: stderr: "" May 12 12:48:13.780: INFO: stdout: "Paused\n" STEP: exposing RC May 12 12:48:13.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6963' May 12 12:48:14.844: INFO: stderr: "" May 12 12:48:14.844: INFO: stdout: "service/rm2 exposed\n" May 12 12:48:15.255: INFO: Service rm2 in namespace kubectl-6963 found. 
STEP: exposing service May 12 12:48:17.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6963' May 12 12:48:17.408: INFO: stderr: "" May 12 12:48:17.408: INFO: stdout: "service/rm3 exposed\n" May 12 12:48:17.443: INFO: Service rm3 in namespace kubectl-6963 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:48:19.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6963" for this suite. • [SLOW TEST:11.922 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":81,"skipped":1153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:48:19.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs May 12 12:48:19.696: INFO: Waiting up to 5m0s for pod "pod-5899c534-82ad-4746-8471-f1b8559409fc" in namespace "emptydir-12" to be "Succeeded or Failed" May 12 12:48:19.724: INFO: Pod "pod-5899c534-82ad-4746-8471-f1b8559409fc": Phase="Pending", Reason="", readiness=false. Elapsed: 27.820196ms May 12 12:48:21.727: INFO: Pod "pod-5899c534-82ad-4746-8471-f1b8559409fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031108111s May 12 12:48:23.730: INFO: Pod "pod-5899c534-82ad-4746-8471-f1b8559409fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033999883s STEP: Saw pod success May 12 12:48:23.730: INFO: Pod "pod-5899c534-82ad-4746-8471-f1b8559409fc" satisfied condition "Succeeded or Failed" May 12 12:48:23.732: INFO: Trying to get logs from node kali-worker2 pod pod-5899c534-82ad-4746-8471-f1b8559409fc container test-container: STEP: delete the pod May 12 12:48:23.760: INFO: Waiting for pod pod-5899c534-82ad-4746-8471-f1b8559409fc to disappear May 12 12:48:23.776: INFO: Pod pod-5899c534-82ad-4746-8471-f1b8559409fc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:48:23.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-12" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1205,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:48:23.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:48:24.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-9702" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":83,"skipped":1221,"failed":0} ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:48:24.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 12 12:48:24.420: INFO: Waiting up to 5m0s for pod "downward-api-289bfe00-bd6e-41d0-b8e0-408778785f04" in namespace "downward-api-4119" to be "Succeeded or Failed" May 12 12:48:24.443: INFO: Pod "downward-api-289bfe00-bd6e-41d0-b8e0-408778785f04": Phase="Pending", Reason="", readiness=false. Elapsed: 23.67155ms May 12 12:48:26.450: INFO: Pod "downward-api-289bfe00-bd6e-41d0-b8e0-408778785f04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029825288s May 12 12:48:28.495: INFO: Pod "downward-api-289bfe00-bd6e-41d0-b8e0-408778785f04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075257s May 12 12:48:30.499: INFO: Pod "downward-api-289bfe00-bd6e-41d0-b8e0-408778785f04": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.079520007s STEP: Saw pod success May 12 12:48:30.499: INFO: Pod "downward-api-289bfe00-bd6e-41d0-b8e0-408778785f04" satisfied condition "Succeeded or Failed" May 12 12:48:30.503: INFO: Trying to get logs from node kali-worker pod downward-api-289bfe00-bd6e-41d0-b8e0-408778785f04 container dapi-container: STEP: delete the pod May 12 12:48:30.555: INFO: Waiting for pod downward-api-289bfe00-bd6e-41d0-b8e0-408778785f04 to disappear May 12 12:48:30.604: INFO: Pod downward-api-289bfe00-bd6e-41d0-b8e0-408778785f04 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:48:30.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4119" for this suite. • [SLOW TEST:6.286 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1221,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:48:30.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: 
Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:48:30.700: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:48:31.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5893" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":85,"skipped":1234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:48:31.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7986.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7986.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > 
/results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7986.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7986.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 12:48:40.134: INFO: DNS probes using dns-7986/dns-test-99283f1a-65f7-4db1-8057-789e53d21f16 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:48:40.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7986" for this suite. 
• [SLOW TEST:8.726 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":86,"skipped":1277,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:48:40.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 12 12:48:47.362: INFO: &Pod{ObjectMeta:{send-events-610bd097-f152-46ab-93c3-57dee1d6d273 events-2378 /api/v1/namespaces/events-2378/pods/send-events-610bd097-f152-46ab-93c3-57dee1d6d273 9f42745f-6342-49d5-8d2e-e2db3767e680 3725034 0 2020-05-12 12:48:41 +0000 UTC map[name:foo time:181843412] map[] [] [] [{e2e.test Update v1 2020-05-12 12:48:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 
123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:48:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 
123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 53 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5nvkw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5nvkw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5nvkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Co
ntainer{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:48:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:48:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.253,StartTime:2020-05-12 12:48:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:48:45 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://2b0af277026dddd2edab53d749b2109385db1064b635746fc08a84a2ca698589,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 12 12:48:49.367: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 12 12:48:51.371: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:48:51.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2378" for this suite. 
• [SLOW TEST:10.790 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":87,"skipped":1294,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:48:51.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:48:57.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1178" for this suite. 
STEP: Destroying namespace "nsdeletetest-1886" for this suite. May 12 12:48:57.925: INFO: Namespace nsdeletetest-1886 was already deleted STEP: Destroying namespace "nsdeletetest-4696" for this suite. • [SLOW TEST:6.498 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":88,"skipped":1294,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:48:57.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 12 12:48:58.007: INFO: >>> kubeConfig: /root/.kube/config May 12 12:49:01.006: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:49:14.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1360" for this suite. • [SLOW TEST:16.717 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":89,"skipped":1301,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:49:14.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:49:15.380: INFO: Waiting up to 5m0s for pod 
"alpine-nnp-false-32cf32a8-58f3-449e-919f-cd74ef306f2c" in namespace "security-context-test-9825" to be "Succeeded or Failed" May 12 12:49:15.419: INFO: Pod "alpine-nnp-false-32cf32a8-58f3-449e-919f-cd74ef306f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.464128ms May 12 12:49:17.575: INFO: Pod "alpine-nnp-false-32cf32a8-58f3-449e-919f-cd74ef306f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19439277s May 12 12:49:19.578: INFO: Pod "alpine-nnp-false-32cf32a8-58f3-449e-919f-cd74ef306f2c": Phase="Running", Reason="", readiness=true. Elapsed: 4.197843034s May 12 12:49:21.587: INFO: Pod "alpine-nnp-false-32cf32a8-58f3-449e-919f-cd74ef306f2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206436116s May 12 12:49:21.587: INFO: Pod "alpine-nnp-false-32cf32a8-58f3-449e-919f-cd74ef306f2c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:49:21.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9825" for this suite. 
• [SLOW TEST:6.956 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1306,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:49:21.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server May 12 12:49:22.122: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:49:22.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3089" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":91,"skipped":1315,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:49:22.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 12:49:22.894: INFO: Waiting up to 5m0s for pod "pod-6ed59064-7de9-4526-9b08-c849fb83ccfd" in namespace "emptydir-5858" to be "Succeeded or Failed" May 12 12:49:22.910: INFO: Pod "pod-6ed59064-7de9-4526-9b08-c849fb83ccfd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.553157ms May 12 12:49:24.982: INFO: Pod "pod-6ed59064-7de9-4526-9b08-c849fb83ccfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087551187s May 12 12:49:26.984: INFO: Pod "pod-6ed59064-7de9-4526-9b08-c849fb83ccfd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.090105411s STEP: Saw pod success May 12 12:49:26.984: INFO: Pod "pod-6ed59064-7de9-4526-9b08-c849fb83ccfd" satisfied condition "Succeeded or Failed" May 12 12:49:26.986: INFO: Trying to get logs from node kali-worker2 pod pod-6ed59064-7de9-4526-9b08-c849fb83ccfd container test-container: STEP: delete the pod May 12 12:49:27.016: INFO: Waiting for pod pod-6ed59064-7de9-4526-9b08-c849fb83ccfd to disappear May 12 12:49:27.028: INFO: Pod pod-6ed59064-7de9-4526-9b08-c849fb83ccfd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:49:27.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5858" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1328,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:49:27.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:49:27.559: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) 
May 12 12:49:27.580: INFO: Pod name sample-pod: Found 0 pods out of 1 May 12 12:49:32.584: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 12:49:32.584: INFO: Creating deployment "test-rolling-update-deployment" May 12 12:49:32.588: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 12 12:49:32.613: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 12 12:49:34.620: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 12 12:49:34.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884572, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884572, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884572, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884572, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:49:36.626: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 12 12:49:36.634: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment 
deployment-804 /apis/apps/v1/namespaces/deployment-804/deployments/test-rolling-update-deployment 9e6ff132-3ef5-4a42-ac02-bc5ffc1946c8 3725360 1 2020-05-12 12:49:32 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-12 12:49:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 
102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-12 12:49:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 
34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052bf3a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-12 12:49:32 +0000 UTC,LastTransitionTime:2020-05-12 12:49:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-05-12 12:49:36 +0000 UTC,LastTransitionTime:2020-05-12 12:49:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 12 12:49:36.637: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7 deployment-804 /apis/apps/v1/namespaces/deployment-804/replicasets/test-rolling-update-deployment-59d5cb45c7 42b741c9-5f4b-4009-934a-4b0fbd94ce11 3725347 1 2020-05-12 12:49:32 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 9e6ff132-3ef5-4a42-ac02-bc5ffc1946c8 0xc0052bfa77 0xc0052bfa78}] [] [{kube-controller-manager Update apps/v1 2020-05-12 12:49:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105
99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 101 54 102 102 49 51 50 45 51 101 102 53 45 52 97 52 50 45 97 99 48 50 45 98 99 53 102 102 99 49 57 52 54 99 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 
123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052bfb58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 12 12:49:36.637: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 12 12:49:36.637: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-804 /apis/apps/v1/namespaces/deployment-804/replicasets/test-rolling-update-controller 50225bff-c77b-4f68-9e9e-ae1ce7a06043 3725359 2 2020-05-12 12:49:27 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 9e6ff132-3ef5-4a42-ac02-bc5ffc1946c8 0xc0052bf917 0xc0052bf918}] [] [{e2e.test Update apps/v1 2020-05-12 12:49:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 
102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-12 12:49:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 
117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 101 54 102 102 49 51 50 45 51 101 102 53 45 52 97 52 50 45 97 99 48 50 45 98 99 53 102 102 99 49 57 52 54 99 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0052bf9e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 12:49:36.640: 
INFO: Pod "test-rolling-update-deployment-59d5cb45c7-5txxx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-5txxx test-rolling-update-deployment-59d5cb45c7- deployment-804 /api/v1/namespaces/deployment-804/pods/test-rolling-update-deployment-59d5cb45c7-5txxx 8dba3b5b-d049-4524-a7c8-a6a6c3c9f172 3725346 0 2020-05-12 12:49:32 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 42b741c9-5f4b-4009-934a-4b0fbd94ce11 0xc00534bf37 0xc00534bf38}] [] [{kube-controller-manager Update v1 2020-05-12 12:49:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 50 98 55 52 49 99 57 45 53 102 52 98 45 52 48 48 57 45 57 51 52 97 45 52 98 48 102 98 100 57 52 99 101 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 
58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 12:49:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 
121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnwrf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnwrf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnwrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPrese
nt,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:49:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:49:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:49:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:49:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.65,StartTime:2020-05-12 12:49:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:49:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://36f50b9f350b0741698a09dc51ad9685ec44bb0161ca648a5265ae7f15c0ecf0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:49:36.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-804" for this suite.
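An aside on reading the pod dump above: the long run of decimal numbers near its start is the pod's `managedFields` `FieldsV1` payload, which Go's struct printer renders as a raw byte slice, one decimal value per ASCII byte of an underlying JSON document. A minimal sketch of decoding it (the sample values are taken directly from the dump, where they spell out `"f:lastProbeTime":{}`):

```python
# Decode a FieldsV1 Raw byte slice, as printed by Go's struct dumper,
# back into the JSON text it encodes. The sample values below are
# copied from the pod dump above.
raw = [34, 102, 58, 108, 97, 115, 116, 80, 114, 111, 98, 101,
       84, 105, 109, 101, 34, 58, 123, 125]
text = bytes(raw).decode("utf-8")
print(text)  # "f:lastProbeTime":{}
```

Piping a full byte run through this turns the numeric wall back into the server-side-apply field ownership map.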
• [SLOW TEST:9.611 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":93,"skipped":1346,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:49:36.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 12:49:37.575: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 12:49:39.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884577, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884577, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884577, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884577, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 12:49:41.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884577, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884577, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884577, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724884577, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 12:49:44.954: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:49:45.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6261" for this suite.
STEP: Destroying namespace "webhook-6261-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.168 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":94,"skipped":1365,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:49:45.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 12:49:46.140: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Pending, waiting for it to be Running (with Ready = true)
May 12 12:49:48.388: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Pending, waiting for it to be Running (with Ready = true)
May 12 12:49:50.144: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Pending, waiting for it to be Running (with Ready = true)
May 12 12:49:52.143: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Running (Ready = false)
May 12 12:49:54.143: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Running (Ready = false)
May 12 12:49:56.143: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Running (Ready = false)
May 12 12:49:58.941: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Running (Ready = false)
May 12 12:50:00.143: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Running (Ready = false)
May 12 12:50:02.144: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Running (Ready = false)
May 12 12:50:04.190: INFO: The status of Pod test-webserver-2f1a3c1d-cacc-4049-a556-7b82c16cfac8 is Running (Ready = true)
May 12 12:50:04.192: INFO: Container started at 2020-05-12 12:49:49 +0000 UTC, pod became ready at 2020-05-12 12:50:04 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:50:04.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9319" for this suite.
• [SLOW TEST:18.384 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1384,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:50:04.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-e239a021-6b63-4b36-adc8-c29a2fb07210 in namespace container-probe-4622
May 12 12:50:08.346: INFO: Started pod liveness-e239a021-6b63-4b36-adc8-c29a2fb07210 in namespace container-probe-4622
STEP: checking the pod's current state and verifying that restartCount is present
May 12 12:50:08.349: INFO: Initial restart count of pod liveness-e239a021-6b63-4b36-adc8-c29a2fb07210 is 0
May 12 12:50:30.507: INFO: Restart count of pod container-probe-4622/liveness-e239a021-6b63-4b36-adc8-c29a2fb07210 is now 1 (22.15819375s elapsed)
May 12 12:50:48.541: INFO: Restart count of pod container-probe-4622/liveness-e239a021-6b63-4b36-adc8-c29a2fb07210 is now 2 (40.192310095s elapsed)
May 12 12:51:08.880: INFO: Restart count of pod container-probe-4622/liveness-e239a021-6b63-4b36-adc8-c29a2fb07210 is now 3 (1m0.531249577s elapsed)
May 12 12:51:31.314: INFO: Restart count of pod container-probe-4622/liveness-e239a021-6b63-4b36-adc8-c29a2fb07210 is now 4 (1m22.964378646s elapsed)
May 12 12:52:32.303: INFO: Restart count of pod container-probe-4622/liveness-e239a021-6b63-4b36-adc8-c29a2fb07210 is now 5 (2m23.954259777s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:52:32.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4622" for this suite.
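The widening gaps between the restart-count lines above (roughly 22s, 40s, 1m, 1m23s, 2m24s) reflect the kubelet's CrashLoopBackOff behavior: the delay before each restart doubles from 10s up to a 5-minute cap. A minimal sketch of that schedule (the container's own run time and probe intervals are ignored, so these are lower bounds, not the exact elapsed times in the log):

```python
# Sketch of the kubelet's CrashLoopBackOff schedule: the delay before
# each restart doubles from a 10s base, capped at 300s (5 minutes).
# This models only the back-off delays, not probe timing or run time.
def backoff_delays(restarts, base=10, cap=300):
    delays = []
    d = base
    for _ in range(restarts):
        delays.append(min(d, cap))
        d *= 2
    return delays

print(backoff_delays(5))  # [10, 20, 40, 80, 160]
```

Summing these delays plus each run's lifetime approximates why restart 5 lands well past the two-minute mark.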
• [SLOW TEST:148.239 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1422,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:52:32.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 12 12:52:32.820: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 12 12:52:32.841: INFO: Waiting for terminating namespaces to be deleted...
May 12 12:52:32.844: INFO: Logging pods the kubelet thinks is on node kali-worker before test
May 12 12:52:32.864: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 12:52:32.864: INFO: Container kindnet-cni ready: true, restart count 1
May 12 12:52:32.864: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 12:52:32.864: INFO: Container kube-proxy ready: true, restart count 0
May 12 12:52:32.864: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test
May 12 12:52:32.882: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 12:52:32.882: INFO: Container kube-proxy ready: true, restart count 0
May 12 12:52:32.882: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 12:52:32.882: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e48f90893d889], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:52:33.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3394" for this suite.
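The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") comes from nodeSelector filtering: a node is feasible only if every key/value pair in the pod's nodeSelector appears among the node's labels. A minimal sketch of that check (the labels and selector below are illustrative, not taken from the test cluster):

```python
# nodeSelector filtering, sketched: a node matches only when every
# key/value in the pod's nodeSelector is present in the node's labels.
def node_matches(node_labels, node_selector):
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Hypothetical labels; the test's pod uses a selector no node carries.
nodes = [{"kubernetes.io/hostname": "kali-worker"},
         {"kubernetes.io/hostname": "kali-worker2"}]
selector = {"disktype": "ssd"}
feasible = [n for n in nodes if node_matches(n, selector)]
print(len(feasible))  # 0 -> the pod stays Pending with FailedScheduling
```

With zero feasible nodes, the scheduler emits exactly the kind of Warning event the test then waits for.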
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":97,"skipped":1428,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:52:33.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
May 12 12:52:33.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7327'
May 12 12:52:34.324: INFO: stderr: ""
May 12 12:52:34.324: INFO: stdout: "pod/pause created\n"
May 12 12:52:34.324: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 12 12:52:34.324: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7327" to be "running and ready"
May 12 12:52:34.404: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 79.310359ms
May 12 12:52:36.407: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082612236s
May 12 12:52:38.411: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.0865382s
May 12 12:52:38.411: INFO: Pod "pause" satisfied condition "running and ready"
May 12 12:52:38.411: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
May 12 12:52:38.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7327'
May 12 12:52:38.504: INFO: stderr: ""
May 12 12:52:38.504: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 12 12:52:38.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7327'
May 12 12:52:38.593: INFO: stderr: ""
May 12 12:52:38.593: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 12 12:52:38.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7327'
May 12 12:52:38.688: INFO: stderr: ""
May 12 12:52:38.688: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 12 12:52:38.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7327'
May 12 12:52:38.785: INFO: stderr: ""
May 12 12:52:38.785: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
May 12 12:52:38.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7327'
May 12 12:52:38.963: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 12:52:38.963: INFO: stdout: "pod \"pause\" force deleted\n"
May 12 12:52:38.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7327'
May 12 12:52:39.396: INFO: stderr: "No resources found in kubectl-7327 namespace.\n"
May 12 12:52:39.396: INFO: stdout: ""
May 12 12:52:39.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7327 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 12 12:52:39.513: INFO: stderr: ""
May 12 12:52:39.513: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:52:39.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7327" for this suite.
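The two `kubectl label` invocations above use the command's two argument forms: `key=value` sets a label, and a trailing dash (`testing-label-`) removes one. A minimal sketch of that argument parsing applied to a pod's label map (the parser is a simplified illustration, not kubectl's actual implementation):

```python
# kubectl-style label arguments, sketched: "key=value" sets a label,
# a trailing "-" (e.g. "testing-label-") removes it. Simplified: no
# validation of key syntax, and "=" presence is checked first, as a
# value may legitimately end in "-".
def apply_label_arg(labels, arg):
    if "=" in arg:
        key, _, value = arg.partition("=")
        labels[key] = value
    elif arg.endswith("-"):
        labels.pop(arg[:-1], None)
    return labels

labels = {}
apply_label_arg(labels, "testing-label=testing-label-value")
print(labels)  # {'testing-label': 'testing-label-value'}
apply_label_arg(labels, "testing-label-")
print(labels)  # {}
```

This mirrors the sequence the test verifies with `kubectl get pod pause -L testing-label` after each step.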
• [SLOW TEST:5.596 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":98,"skipped":1429,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 12:52:39.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-273
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-273
I0512 12:52:40.611069 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-273, replica count: 2
I0512 12:52:43.661630 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 12:52:46.661866 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 12 12:52:46.661: INFO: Creating new exec pod
May 12 12:52:51.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-273 execpodjnqsw -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 12 12:52:51.890: INFO: stderr: "I0512 12:52:51.823960 1444 log.go:172] (0xc000a74210) (0xc000858280) Create stream\nI0512 12:52:51.824030 1444 log.go:172] (0xc000a74210) (0xc000858280) Stream added, broadcasting: 1\nI0512 12:52:51.826337 1444 log.go:172] (0xc000a74210) Reply frame received for 1\nI0512 12:52:51.826361 1444 log.go:172] (0xc000a74210) (0xc000858320) Create stream\nI0512 12:52:51.826367 1444 log.go:172] (0xc000a74210) (0xc000858320) Stream added, broadcasting: 3\nI0512 12:52:51.827073 1444 log.go:172] (0xc000a74210) Reply frame received for 3\nI0512 12:52:51.827114 1444 log.go:172] (0xc000a74210) (0xc0005ff360) Create stream\nI0512 12:52:51.827133 1444 log.go:172] (0xc000a74210) (0xc0005ff360) Stream added, broadcasting: 5\nI0512 12:52:51.827845 1444 log.go:172] (0xc000a74210) Reply frame received for 5\nI0512 12:52:51.880698 1444 log.go:172] (0xc000a74210) Data frame received for 5\nI0512 12:52:51.880730 1444 log.go:172] (0xc0005ff360) (5) Data frame handling\nI0512 12:52:51.880759 1444 log.go:172] (0xc0005ff360) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0512 12:52:51.880901 1444 log.go:172] (0xc000a74210) Data frame received for 5\nI0512 12:52:51.880915 1444 log.go:172] (0xc0005ff360) (5) Data frame handling\nI0512 12:52:51.880924 1444 log.go:172] (0xc0005ff360) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0512 12:52:51.881382 1444 log.go:172] (0xc000a74210) Data frame received for 5\nI0512 12:52:51.881403 1444 log.go:172] (0xc0005ff360) (5) Data frame handling\nI0512 12:52:51.881568 1444 log.go:172] (0xc000a74210) Data frame received for 3\nI0512 12:52:51.881577 1444 log.go:172] (0xc000858320) (3) Data frame handling\nI0512 12:52:51.886688 1444 log.go:172] (0xc000a74210) Data frame received for 1\nI0512 12:52:51.886705 1444 log.go:172] (0xc000858280) (1) Data frame handling\nI0512 12:52:51.886715 1444 log.go:172] (0xc000858280) (1) Data frame sent\nI0512 12:52:51.886938 1444 log.go:172] (0xc000a74210) (0xc000858280) Stream removed, broadcasting: 1\nI0512 12:52:51.887505 1444 log.go:172] (0xc000a74210) (0xc000858280) Stream removed, broadcasting: 1\nI0512 12:52:51.887529 1444 log.go:172] (0xc000a74210) (0xc000858320) Stream removed, broadcasting: 3\nI0512 12:52:51.887541 1444 log.go:172] (0xc000a74210) (0xc0005ff360) Stream removed, broadcasting: 5\n"
May 12 12:52:51.891: INFO: stdout: ""
May 12 12:52:51.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-273 execpodjnqsw -- /bin/sh -x -c nc -zv -t -w 2 10.97.186.33 80'
May 12 12:52:52.058: INFO: stderr: "I0512 12:52:51.991402 1463 log.go:172] (0xc000b2a0b0) (0xc00052ab40) Create stream\nI0512 12:52:51.991446 1463 log.go:172] (0xc000b2a0b0) (0xc00052ab40) Stream added, broadcasting: 1\nI0512 12:52:51.994405 1463 log.go:172] (0xc000b2a0b0) Reply frame received for 1\nI0512 12:52:51.994487 1463 log.go:172] (0xc000b2a0b0) (0xc000b74000) Create stream\nI0512 12:52:51.994518 1463 log.go:172] (0xc000b2a0b0) (0xc000b74000) Stream added, broadcasting: 3\nI0512 12:52:51.995383 1463 log.go:172] (0xc000b2a0b0) Reply frame received for 3\nI0512 12:52:51.995437 1463 log.go:172] (0xc000b2a0b0) (0xc0007c12c0) Create stream\nI0512 12:52:51.995468 1463 log.go:172] (0xc000b2a0b0) (0xc0007c12c0) Stream added, broadcasting: 5\nI0512 12:52:51.996259 1463 log.go:172] (0xc000b2a0b0) Reply frame received for 5\nI0512 12:52:52.053299 1463 log.go:172] (0xc000b2a0b0) Data frame received for 3\nI0512 12:52:52.053348 1463 log.go:172] (0xc000b74000) (3) Data frame handling\nI0512 12:52:52.053389 1463 log.go:172] (0xc000b2a0b0) Data frame received for 5\nI0512 12:52:52.053416 1463 log.go:172] (0xc0007c12c0) (5) Data frame handling\nI0512 12:52:52.053433 1463 log.go:172] (0xc0007c12c0) (5) Data frame sent\nI0512 12:52:52.053445 1463 log.go:172] (0xc000b2a0b0) Data frame received for 5\nI0512 12:52:52.053455 1463 log.go:172] (0xc0007c12c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.186.33 80\nConnection to 10.97.186.33 80 port [tcp/http] succeeded!\nI0512 12:52:52.054399 1463 log.go:172] (0xc000b2a0b0) Data frame received for 1\nI0512 12:52:52.054439 1463 log.go:172] (0xc00052ab40) (1) Data frame handling\nI0512 12:52:52.054463 1463 log.go:172] (0xc00052ab40) (1) Data frame sent\nI0512 12:52:52.054489 1463 log.go:172] (0xc000b2a0b0) (0xc00052ab40) Stream removed, broadcasting: 1\nI0512 12:52:52.054521 1463 log.go:172] (0xc000b2a0b0) Go away received\nI0512 12:52:52.054923 1463 log.go:172] (0xc000b2a0b0) (0xc00052ab40) Stream removed, broadcasting: 1\nI0512 12:52:52.054948 1463 log.go:172] (0xc000b2a0b0) (0xc000b74000) Stream removed, broadcasting: 3\nI0512 12:52:52.054963 1463 log.go:172] (0xc000b2a0b0) (0xc0007c12c0) Stream removed, broadcasting: 5\n"
May 12 12:52:52.058: INFO: stdout: ""
May 12 12:52:52.058: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 12:52:52.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-273" for this suite.
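The test probes the converted service with `nc -zv -t -w 2 <host> 80`: a zero-I/O TCP connect (`-z`) with a 2-second timeout (`-w 2`), succeeding if the handshake completes. A minimal Python equivalent of that reachability check (an illustration of the technique, not part of the test framework):

```python
import socket

# nc -z style check: attempt a TCP handshake and close immediately,
# sending no application data. Returns True iff the connect succeeds
# within the timeout.
def tcp_reachable(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run against the service DNS name and then the ClusterIP, as the test does, both checks must pass once the endpoints are in place.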
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.831 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":99,"skipped":1437,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:52:52.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:52:52.819: INFO: Creating ReplicaSet my-hostname-basic-97967bb2-bf97-4eff-8e33-81e84573edc0 May 12 12:52:52.868: INFO: Pod name my-hostname-basic-97967bb2-bf97-4eff-8e33-81e84573edc0: Found 0 pods out of 1 May 12 12:52:57.887: INFO: Pod name my-hostname-basic-97967bb2-bf97-4eff-8e33-81e84573edc0: Found 1 pods out of 1 May 12 12:52:57.887: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-97967bb2-bf97-4eff-8e33-81e84573edc0" is running May 12 12:52:57.890: INFO: Pod "my-hostname-basic-97967bb2-bf97-4eff-8e33-81e84573edc0-5gltb" is running 
(conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:52:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:52:56 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:52:56 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:52:52 +0000 UTC Reason: Message:}]) May 12 12:52:57.890: INFO: Trying to dial the pod May 12 12:53:02.907: INFO: Controller my-hostname-basic-97967bb2-bf97-4eff-8e33-81e84573edc0: Got expected result from replica 1 [my-hostname-basic-97967bb2-bf97-4eff-8e33-81e84573edc0-5gltb]: "my-hostname-basic-97967bb2-bf97-4eff-8e33-81e84573edc0-5gltb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:53:02.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8005" for this suite. 
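The condition dump above is how the framework decides the replica pod is "running and ready": it looks for the `Ready` condition with status `True`. A hypothetical helper illustrating the same check over the condition types printed in the log:

```python
def is_pod_ready(conditions):
    """Return True if the pod's Ready condition reports Status True,
    mirroring the condition list the e2e framework prints."""
    for cond in conditions:
        if cond["Type"] == "Ready":
            return cond["Status"] == "True"
    return False

conds = [
    {"Type": "Initialized", "Status": "True"},
    {"Type": "Ready", "Status": "True"},
    {"Type": "ContainersReady", "Status": "True"},
    {"Type": "PodScheduled", "Status": "True"},
]
print(is_pod_ready(conds))  # True
```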
• [SLOW TEST:10.551 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":100,"skipped":1446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:53:02.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 12 12:53:03.021: INFO: Pod name pod-release: Found 0 pods out of 1 May 12 12:53:08.025: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:53:08.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-418" for this suite. 
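The ReplicationController test that follows relies on a pod being owned only while its labels match the controller's selector; changing the matched label orphans (releases) the pod. A sketch of that equality-based selector match, with hypothetical label values:

```python
def matches(selector, labels):
    """A pod matches when every selector key/value pair is present
    in the pod's labels (equality-based selection)."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-release"}        # hypothetical RC selector
owned    = {"name": "pod-release"}        # still matched, still owned
released = {"name": "not-pod-release"}    # label changed -> pod released
print(matches(selector, owned), matches(selector, released))  # True False
```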
• [SLOW TEST:5.278 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":101,"skipped":1493,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:53:08.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-0a1e7acd-a36f-40de-9540-0cbfa59ad349 STEP: Creating a pod to test consume configMaps May 12 12:53:08.356: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fcefc505-ca5e-47d9-bc31-40829c867a19" in namespace "projected-1052" to be "Succeeded or Failed" May 12 12:53:08.360: INFO: Pod "pod-projected-configmaps-fcefc505-ca5e-47d9-bc31-40829c867a19": Phase="Pending", Reason="", readiness=false. Elapsed: 3.323137ms May 12 12:53:10.532: INFO: Pod "pod-projected-configmaps-fcefc505-ca5e-47d9-bc31-40829c867a19": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.175366167s May 12 12:53:12.536: INFO: Pod "pod-projected-configmaps-fcefc505-ca5e-47d9-bc31-40829c867a19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179913127s May 12 12:53:14.540: INFO: Pod "pod-projected-configmaps-fcefc505-ca5e-47d9-bc31-40829c867a19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183808557s STEP: Saw pod success May 12 12:53:14.540: INFO: Pod "pod-projected-configmaps-fcefc505-ca5e-47d9-bc31-40829c867a19" satisfied condition "Succeeded or Failed" May 12 12:53:14.543: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-fcefc505-ca5e-47d9-bc31-40829c867a19 container projected-configmap-volume-test: STEP: delete the pod May 12 12:53:14.645: INFO: Waiting for pod pod-projected-configmaps-fcefc505-ca5e-47d9-bc31-40829c867a19 to disappear May 12 12:53:14.811: INFO: Pod pod-projected-configmaps-fcefc505-ca5e-47d9-bc31-40829c867a19 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:53:14.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1052" for this suite. 
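The `Waiting up to 5m0s … to be "Succeeded or Failed"` lines above are a poll loop over the pod phase, logging the elapsed time on each probe. A simplified sketch of that loop, assuming phases come from a hypothetical `get_phase` callable rather than a real API client (and omitting the sleep between probes):

```python
def wait_for_terminal_phase(get_phase, attempts=150):
    """Poll until the pod reports Succeeded or Failed, loosely mimicking
    the framework's wait-for-pod-success behaviour."""
    for _ in range(attempts):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))  # Succeeded
```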
• [SLOW TEST:6.642 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1509,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:53:14.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:53:14.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version' May 12 12:53:15.352: INFO: stderr: "" May 12 12:53:15.352: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:20Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: 
version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:53:15.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7507" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":103,"skipped":1569,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:53:15.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:53:15.711: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:53:22.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-4973" for this suite. • [SLOW TEST:6.991 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":104,"skipped":1586,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:53:22.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 12 12:53:22.486: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 12:53:22.506: INFO: Waiting for terminating namespaces to be deleted... 
May 12 12:53:22.508: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 12 12:53:22.511: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 12 12:53:22.511: INFO: Container kindnet-cni ready: true, restart count 1 May 12 12:53:22.511: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 12 12:53:22.511: INFO: Container kube-proxy ready: true, restart count 0 May 12 12:53:22.511: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 12 12:53:22.515: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 12 12:53:22.515: INFO: Container kindnet-cni ready: true, restart count 0 May 12 12:53:22.515: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 12 12:53:22.515: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node kali-worker STEP: verifying the node has the label node kali-worker2 May 12 12:53:22.635: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker May 12 12:53:22.635: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2 May 12 12:53:22.635: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2 May 12 12:53:22.635: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker STEP: Starting Pods to consume most of the cluster CPU. May 12 12:53:22.635: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker May 12 12:53:22.641: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
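The filler-pod sizing that follows is plain millicore arithmetic: the test sums the CPU already requested on each node and creates a filler pod requesting the remainder of the node's allocatable CPU, so one more pod cannot be scheduled. A sketch of that arithmetic; the allocatable value is an assumption, since the log does not print it:

```python
def parse_millicpu(q):
    """Parse a Kubernetes CPU quantity like '100m' or '2' into millicores."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def filler_request(allocatable_mcpu, existing_requests):
    """CPU the filler pod must request to consume the rest of the node."""
    return allocatable_mcpu - sum(parse_millicpu(r) for r in existing_requests)

# Hypothetical allocatable; the existing requests match the log
# (kindnet 100m, kube-proxy 0m), yielding the 11130m seen above.
print(filler_request(parse_millicpu("11230m"), ["100m", "0m"]))  # 11130
```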
STEP: Considering event: Type = [Normal], Name = [filler-pod-65e4086f-1b19-4d40-8cf0-8d9da5db64f9.160e49049d223719], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8679/filler-pod-65e4086f-1b19-4d40-8cf0-8d9da5db64f9 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-65e4086f-1b19-4d40-8cf0-8d9da5db64f9.160e49054d811ccf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-65e4086f-1b19-4d40-8cf0-8d9da5db64f9.160e4905a2683686], Reason = [Created], Message = [Created container filler-pod-65e4086f-1b19-4d40-8cf0-8d9da5db64f9] STEP: Considering event: Type = [Normal], Name = [filler-pod-65e4086f-1b19-4d40-8cf0-8d9da5db64f9.160e4905b15fc59c], Reason = [Started], Message = [Started container filler-pod-65e4086f-1b19-4d40-8cf0-8d9da5db64f9] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0f854d6-1d5c-4a65-b336-754cab90de5d.160e49049ba32d38], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8679/filler-pod-e0f854d6-1d5c-4a65-b336-754cab90de5d to kali-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0f854d6-1d5c-4a65-b336-754cab90de5d.160e4904ea088a6c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0f854d6-1d5c-4a65-b336-754cab90de5d.160e49057a50576f], Reason = [Created], Message = [Created container filler-pod-e0f854d6-1d5c-4a65-b336-754cab90de5d] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0f854d6-1d5c-4a65-b336-754cab90de5d.160e49059b26f1cf], Reason = [Started], Message = [Started container filler-pod-e0f854d6-1d5c-4a65-b336-754cab90de5d] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e49060ee57276], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, 
that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node kali-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kali-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:53:29.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8679" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:7.558 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":105,"skipped":1607,"failed":0} [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:53:29.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job 
STEP: deleting Job.batch foo in namespace job-3457, will wait for the garbage collector to delete the pods May 12 12:53:36.351: INFO: Deleting Job.batch foo took: 42.444364ms May 12 12:53:52.452: INFO: Terminating Job.batch foo pods took: 16.100266345s STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:54:44.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3457" for this suite. • [SLOW TEST:74.730 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":106,"skipped":1607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:54:44.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions May 12 12:54:45.237: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions' May 12 12:54:45.682: INFO: stderr: "" May 12 12:54:45.682: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:54:45.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3548" for this suite. 
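The api-versions check above reduces to splitting kubectl's stdout on newlines and asserting that the core group version `v1` appears as its own entry. A minimal sketch over output like that captured in the log (truncated here):

```python
def has_core_v1(api_versions_stdout):
    """True if the core API group's 'v1' is among the advertised versions."""
    return "v1" in api_versions_stdout.strip().split("\n")

stdout = "admissionregistration.k8s.io/v1\napps/v1\nbatch/v1\nv1\n"
print(has_core_v1(stdout))  # True
```

Note the exact-match split: `"apps/v1"` alone would not satisfy the check, which is why the test validates the core group specifically.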
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":107,"skipped":1634,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:54:45.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3324 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3324 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3324 May 12 12:54:45.961: INFO: Found 0 stateful pods, waiting for 1 May 12 12:54:56.848: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 12 12:54:56.851: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3324 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 12:54:57.940: INFO: stderr: "I0512 12:54:57.665309 1520 log.go:172] (0xc000bc7ce0) (0xc000a91900) Create stream\nI0512 12:54:57.665381 1520 log.go:172] (0xc000bc7ce0) (0xc000a91900) Stream added, broadcasting: 1\nI0512 12:54:57.667348 1520 log.go:172] (0xc000bc7ce0) Reply frame received for 1\nI0512 12:54:57.667386 1520 log.go:172] (0xc000bc7ce0) (0xc000b6e0a0) Create stream\nI0512 12:54:57.667398 1520 log.go:172] (0xc000bc7ce0) (0xc000b6e0a0) Stream added, broadcasting: 3\nI0512 12:54:57.668257 1520 log.go:172] (0xc000bc7ce0) Reply frame received for 3\nI0512 12:54:57.668284 1520 log.go:172] (0xc000bc7ce0) (0xc000b6e140) Create stream\nI0512 12:54:57.668308 1520 log.go:172] (0xc000bc7ce0) (0xc000b6e140) Stream added, broadcasting: 5\nI0512 12:54:57.669271 1520 log.go:172] (0xc000bc7ce0) Reply frame received for 5\nI0512 12:54:57.741476 1520 log.go:172] (0xc000bc7ce0) Data frame received for 5\nI0512 12:54:57.741501 1520 log.go:172] (0xc000b6e140) (5) Data frame handling\nI0512 12:54:57.741516 1520 log.go:172] (0xc000b6e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 12:54:57.932076 1520 log.go:172] (0xc000bc7ce0) Data frame received for 3\nI0512 12:54:57.932203 1520 log.go:172] (0xc000b6e0a0) (3) Data frame handling\nI0512 12:54:57.932275 1520 log.go:172] (0xc000b6e0a0) (3) Data frame sent\nI0512 12:54:57.932332 1520 log.go:172] (0xc000bc7ce0) Data frame received for 3\nI0512 12:54:57.932424 1520 log.go:172] (0xc000b6e0a0) (3) Data frame handling\nI0512 12:54:57.932487 1520 log.go:172] (0xc000bc7ce0) Data frame received for 5\nI0512 12:54:57.932563 1520 log.go:172] (0xc000b6e140) (5) Data frame handling\nI0512 12:54:57.934221 1520 log.go:172] (0xc000bc7ce0) Data frame received for 1\nI0512 12:54:57.934313 1520 log.go:172] (0xc000a91900) (1) 
Data frame handling\nI0512 12:54:57.934343 1520 log.go:172] (0xc000a91900) (1) Data frame sent\nI0512 12:54:57.934396 1520 log.go:172] (0xc000bc7ce0) (0xc000a91900) Stream removed, broadcasting: 1\nI0512 12:54:57.934427 1520 log.go:172] (0xc000bc7ce0) Go away received\nI0512 12:54:57.934871 1520 log.go:172] (0xc000bc7ce0) (0xc000a91900) Stream removed, broadcasting: 1\nI0512 12:54:57.934890 1520 log.go:172] (0xc000bc7ce0) (0xc000b6e0a0) Stream removed, broadcasting: 3\nI0512 12:54:57.934899 1520 log.go:172] (0xc000bc7ce0) (0xc000b6e140) Stream removed, broadcasting: 5\n" May 12 12:54:57.940: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 12:54:57.940: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 12:54:58.022: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 12:55:08.026: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 12:55:08.026: INFO: Waiting for statefulset status.replicas updated to 0 May 12 12:55:08.053: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999334s May 12 12:55:09.087: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993895268s May 12 12:55:10.091: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.959352648s May 12 12:55:11.095: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.955302925s May 12 12:55:12.098: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.952008602s May 12 12:55:13.279: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.948137591s May 12 12:55:14.283: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.767913061s May 12 12:55:15.620: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.763795036s May 12 12:55:16.625: INFO: Verifying statefulset ss doesn't scale past 1 
for another 1.426743284s May 12 12:55:17.751: INFO: Verifying statefulset ss doesn't scale past 1 for another 421.743299ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3324 May 12 12:55:18.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3324 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 12:55:19.050: INFO: stderr: "I0512 12:55:18.869512 1540 log.go:172] (0xc000afb290) (0xc000c98500) Create stream\nI0512 12:55:18.869547 1540 log.go:172] (0xc000afb290) (0xc000c98500) Stream added, broadcasting: 1\nI0512 12:55:18.871271 1540 log.go:172] (0xc000afb290) Reply frame received for 1\nI0512 12:55:18.871326 1540 log.go:172] (0xc000afb290) (0xc0006c55e0) Create stream\nI0512 12:55:18.871354 1540 log.go:172] (0xc000afb290) (0xc0006c55e0) Stream added, broadcasting: 3\nI0512 12:55:18.872064 1540 log.go:172] (0xc000afb290) Reply frame received for 3\nI0512 12:55:18.872083 1540 log.go:172] (0xc000afb290) (0xc0009c21e0) Create stream\nI0512 12:55:18.872093 1540 log.go:172] (0xc000afb290) (0xc0009c21e0) Stream added, broadcasting: 5\nI0512 12:55:18.872772 1540 log.go:172] (0xc000afb290) Reply frame received for 5\nI0512 12:55:18.923133 1540 log.go:172] (0xc000afb290) Data frame received for 5\nI0512 12:55:18.923166 1540 log.go:172] (0xc0009c21e0) (5) Data frame handling\nI0512 12:55:18.923184 1540 log.go:172] (0xc0009c21e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 12:55:19.043801 1540 log.go:172] (0xc000afb290) Data frame received for 3\nI0512 12:55:19.043815 1540 log.go:172] (0xc0006c55e0) (3) Data frame handling\nI0512 12:55:19.043821 1540 log.go:172] (0xc0006c55e0) (3) Data frame sent\nI0512 12:55:19.044146 1540 log.go:172] (0xc000afb290) Data frame received for 5\nI0512 12:55:19.044158 1540 log.go:172] (0xc0009c21e0) (5) Data frame 
handling\nI0512 12:55:19.044178 1540 log.go:172] (0xc000afb290) Data frame received for 3\nI0512 12:55:19.044193 1540 log.go:172] (0xc0006c55e0) (3) Data frame handling\nI0512 12:55:19.045828 1540 log.go:172] (0xc000afb290) Data frame received for 1\nI0512 12:55:19.045845 1540 log.go:172] (0xc000c98500) (1) Data frame handling\nI0512 12:55:19.045856 1540 log.go:172] (0xc000c98500) (1) Data frame sent\nI0512 12:55:19.045867 1540 log.go:172] (0xc000afb290) (0xc000c98500) Stream removed, broadcasting: 1\nI0512 12:55:19.045884 1540 log.go:172] (0xc000afb290) Go away received\nI0512 12:55:19.046207 1540 log.go:172] (0xc000afb290) (0xc000c98500) Stream removed, broadcasting: 1\nI0512 12:55:19.046237 1540 log.go:172] (0xc000afb290) (0xc0006c55e0) Stream removed, broadcasting: 3\nI0512 12:55:19.046259 1540 log.go:172] (0xc000afb290) (0xc0009c21e0) Stream removed, broadcasting: 5\n" May 12 12:55:19.050: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 12:55:19.050: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 12:55:19.094: INFO: Found 1 stateful pods, waiting for 3 May 12 12:55:29.098: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 12:55:29.098: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 12:55:29.098: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 12 12:55:29.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3324 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 12:55:29.327: INFO: stderr: "I0512 12:55:29.226814 1559 log.go:172] (0xc00003be40) 
(0xc0005d34a0) Create stream\nI0512 12:55:29.226868 1559 log.go:172] (0xc00003be40) (0xc0005d34a0) Stream added, broadcasting: 1\nI0512 12:55:29.229022 1559 log.go:172] (0xc00003be40) Reply frame received for 1\nI0512 12:55:29.229315 1559 log.go:172] (0xc00003be40) (0xc0004268c0) Create stream\nI0512 12:55:29.229348 1559 log.go:172] (0xc00003be40) (0xc0004268c0) Stream added, broadcasting: 3\nI0512 12:55:29.230168 1559 log.go:172] (0xc00003be40) Reply frame received for 3\nI0512 12:55:29.230202 1559 log.go:172] (0xc00003be40) (0xc000a3e000) Create stream\nI0512 12:55:29.230215 1559 log.go:172] (0xc00003be40) (0xc000a3e000) Stream added, broadcasting: 5\nI0512 12:55:29.231032 1559 log.go:172] (0xc00003be40) Reply frame received for 5\nI0512 12:55:29.321501 1559 log.go:172] (0xc00003be40) Data frame received for 5\nI0512 12:55:29.321528 1559 log.go:172] (0xc000a3e000) (5) Data frame handling\nI0512 12:55:29.321536 1559 log.go:172] (0xc000a3e000) (5) Data frame sent\nI0512 12:55:29.321541 1559 log.go:172] (0xc00003be40) Data frame received for 5\nI0512 12:55:29.321546 1559 log.go:172] (0xc000a3e000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 12:55:29.321573 1559 log.go:172] (0xc00003be40) Data frame received for 3\nI0512 12:55:29.321581 1559 log.go:172] (0xc0004268c0) (3) Data frame handling\nI0512 12:55:29.321587 1559 log.go:172] (0xc0004268c0) (3) Data frame sent\nI0512 12:55:29.321592 1559 log.go:172] (0xc00003be40) Data frame received for 3\nI0512 12:55:29.321603 1559 log.go:172] (0xc0004268c0) (3) Data frame handling\nI0512 12:55:29.322587 1559 log.go:172] (0xc00003be40) Data frame received for 1\nI0512 12:55:29.322605 1559 log.go:172] (0xc0005d34a0) (1) Data frame handling\nI0512 12:55:29.322615 1559 log.go:172] (0xc0005d34a0) (1) Data frame sent\nI0512 12:55:29.322635 1559 log.go:172] (0xc00003be40) (0xc0005d34a0) Stream removed, broadcasting: 1\nI0512 12:55:29.322662 1559 log.go:172] (0xc00003be40) Go away 
received\nI0512 12:55:29.322977 1559 log.go:172] (0xc00003be40) (0xc0005d34a0) Stream removed, broadcasting: 1\nI0512 12:55:29.323000 1559 log.go:172] (0xc00003be40) (0xc0004268c0) Stream removed, broadcasting: 3\nI0512 12:55:29.323010 1559 log.go:172] (0xc00003be40) (0xc000a3e000) Stream removed, broadcasting: 5\n" May 12 12:55:29.327: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 12:55:29.327: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 12:55:29.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3324 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 12:55:29.602: INFO: stderr: "I0512 12:55:29.462720 1581 log.go:172] (0xc00003afd0) (0xc0008b81e0) Create stream\nI0512 12:55:29.462786 1581 log.go:172] (0xc00003afd0) (0xc0008b81e0) Stream added, broadcasting: 1\nI0512 12:55:29.465729 1581 log.go:172] (0xc00003afd0) Reply frame received for 1\nI0512 12:55:29.465776 1581 log.go:172] (0xc00003afd0) (0xc000667540) Create stream\nI0512 12:55:29.465791 1581 log.go:172] (0xc00003afd0) (0xc000667540) Stream added, broadcasting: 3\nI0512 12:55:29.466737 1581 log.go:172] (0xc00003afd0) Reply frame received for 3\nI0512 12:55:29.466791 1581 log.go:172] (0xc00003afd0) (0xc0003d2000) Create stream\nI0512 12:55:29.466805 1581 log.go:172] (0xc00003afd0) (0xc0003d2000) Stream added, broadcasting: 5\nI0512 12:55:29.467664 1581 log.go:172] (0xc00003afd0) Reply frame received for 5\nI0512 12:55:29.541577 1581 log.go:172] (0xc00003afd0) Data frame received for 5\nI0512 12:55:29.541609 1581 log.go:172] (0xc0003d2000) (5) Data frame handling\nI0512 12:55:29.541637 1581 log.go:172] (0xc0003d2000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 12:55:29.593659 1581 log.go:172] (0xc00003afd0) 
Data frame received for 3\nI0512 12:55:29.593837 1581 log.go:172] (0xc000667540) (3) Data frame handling\nI0512 12:55:29.593857 1581 log.go:172] (0xc000667540) (3) Data frame sent\nI0512 12:55:29.593866 1581 log.go:172] (0xc00003afd0) Data frame received for 3\nI0512 12:55:29.593873 1581 log.go:172] (0xc000667540) (3) Data frame handling\nI0512 12:55:29.593923 1581 log.go:172] (0xc00003afd0) Data frame received for 5\nI0512 12:55:29.593961 1581 log.go:172] (0xc0003d2000) (5) Data frame handling\nI0512 12:55:29.596118 1581 log.go:172] (0xc00003afd0) Data frame received for 1\nI0512 12:55:29.596205 1581 log.go:172] (0xc0008b81e0) (1) Data frame handling\nI0512 12:55:29.596233 1581 log.go:172] (0xc0008b81e0) (1) Data frame sent\nI0512 12:55:29.596282 1581 log.go:172] (0xc00003afd0) (0xc0008b81e0) Stream removed, broadcasting: 1\nI0512 12:55:29.596316 1581 log.go:172] (0xc00003afd0) Go away received\nI0512 12:55:29.596831 1581 log.go:172] (0xc00003afd0) (0xc0008b81e0) Stream removed, broadcasting: 1\nI0512 12:55:29.596869 1581 log.go:172] (0xc00003afd0) (0xc000667540) Stream removed, broadcasting: 3\nI0512 12:55:29.596883 1581 log.go:172] (0xc00003afd0) (0xc0003d2000) Stream removed, broadcasting: 5\n" May 12 12:55:29.602: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 12:55:29.602: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 12:55:29.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3324 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 12:55:29.854: INFO: stderr: "I0512 12:55:29.734554 1604 log.go:172] (0xc00003a370) (0xc0006b5680) Create stream\nI0512 12:55:29.734609 1604 log.go:172] (0xc00003a370) (0xc0006b5680) Stream added, broadcasting: 1\nI0512 12:55:29.737023 1604 log.go:172] (0xc00003a370) 
Reply frame received for 1\nI0512 12:55:29.737072 1604 log.go:172] (0xc00003a370) (0xc0006b5720) Create stream\nI0512 12:55:29.737089 1604 log.go:172] (0xc00003a370) (0xc0006b5720) Stream added, broadcasting: 3\nI0512 12:55:29.738228 1604 log.go:172] (0xc00003a370) Reply frame received for 3\nI0512 12:55:29.738268 1604 log.go:172] (0xc00003a370) (0xc0006b57c0) Create stream\nI0512 12:55:29.738279 1604 log.go:172] (0xc00003a370) (0xc0006b57c0) Stream added, broadcasting: 5\nI0512 12:55:29.739241 1604 log.go:172] (0xc00003a370) Reply frame received for 5\nI0512 12:55:29.798430 1604 log.go:172] (0xc00003a370) Data frame received for 5\nI0512 12:55:29.798460 1604 log.go:172] (0xc0006b57c0) (5) Data frame handling\nI0512 12:55:29.798479 1604 log.go:172] (0xc0006b57c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 12:55:29.845706 1604 log.go:172] (0xc00003a370) Data frame received for 3\nI0512 12:55:29.845756 1604 log.go:172] (0xc0006b5720) (3) Data frame handling\nI0512 12:55:29.845793 1604 log.go:172] (0xc0006b5720) (3) Data frame sent\nI0512 12:55:29.845923 1604 log.go:172] (0xc00003a370) Data frame received for 3\nI0512 12:55:29.845956 1604 log.go:172] (0xc0006b5720) (3) Data frame handling\nI0512 12:55:29.845978 1604 log.go:172] (0xc00003a370) Data frame received for 5\nI0512 12:55:29.845987 1604 log.go:172] (0xc0006b57c0) (5) Data frame handling\nI0512 12:55:29.848269 1604 log.go:172] (0xc00003a370) Data frame received for 1\nI0512 12:55:29.848307 1604 log.go:172] (0xc0006b5680) (1) Data frame handling\nI0512 12:55:29.848330 1604 log.go:172] (0xc0006b5680) (1) Data frame sent\nI0512 12:55:29.848357 1604 log.go:172] (0xc00003a370) (0xc0006b5680) Stream removed, broadcasting: 1\nI0512 12:55:29.848422 1604 log.go:172] (0xc00003a370) Go away received\nI0512 12:55:29.848894 1604 log.go:172] (0xc00003a370) (0xc0006b5680) Stream removed, broadcasting: 1\nI0512 12:55:29.848936 1604 log.go:172] (0xc00003a370) (0xc0006b5720) Stream removed, 
broadcasting: 3\nI0512 12:55:29.848951 1604 log.go:172] (0xc00003a370) (0xc0006b57c0) Stream removed, broadcasting: 5\n" May 12 12:55:29.854: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 12:55:29.854: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 12:55:29.854: INFO: Waiting for statefulset status.replicas updated to 0 May 12 12:55:29.857: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 12 12:55:39.863: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 12:55:39.863: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 12:55:39.863: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 12 12:55:39.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999676s May 12 12:55:40.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995563377s May 12 12:55:41.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989222478s May 12 12:55:42.888: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984035168s May 12 12:55:43.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980117141s May 12 12:55:44.895: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976475789s May 12 12:55:45.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972931895s May 12 12:55:46.933: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.968902539s May 12 12:55:47.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.935191591s May 12 12:55:48.943: INFO: Verifying statefulset ss doesn't scale past 3 for another 929.818587ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3324 May
12 12:55:49.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3324 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 12:55:50.155: INFO: stderr: "I0512 12:55:50.071411 1625 log.go:172] (0xc0000e71e0) (0xc00059f720) Create stream\nI0512 12:55:50.071492 1625 log.go:172] (0xc0000e71e0) (0xc00059f720) Stream added, broadcasting: 1\nI0512 12:55:50.074449 1625 log.go:172] (0xc0000e71e0) Reply frame received for 1\nI0512 12:55:50.074482 1625 log.go:172] (0xc0000e71e0) (0xc0008d0000) Create stream\nI0512 12:55:50.074494 1625 log.go:172] (0xc0000e71e0) (0xc0008d0000) Stream added, broadcasting: 3\nI0512 12:55:50.075197 1625 log.go:172] (0xc0000e71e0) Reply frame received for 3\nI0512 12:55:50.075227 1625 log.go:172] (0xc0000e71e0) (0xc00059f7c0) Create stream\nI0512 12:55:50.075238 1625 log.go:172] (0xc0000e71e0) (0xc00059f7c0) Stream added, broadcasting: 5\nI0512 12:55:50.075882 1625 log.go:172] (0xc0000e71e0) Reply frame received for 5\nI0512 12:55:50.149437 1625 log.go:172] (0xc0000e71e0) Data frame received for 5\nI0512 12:55:50.149469 1625 log.go:172] (0xc00059f7c0) (5) Data frame handling\nI0512 12:55:50.149482 1625 log.go:172] (0xc00059f7c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 12:55:50.149675 1625 log.go:172] (0xc0000e71e0) Data frame received for 3\nI0512 12:55:50.149701 1625 log.go:172] (0xc0008d0000) (3) Data frame handling\nI0512 12:55:50.149718 1625 log.go:172] (0xc0008d0000) (3) Data frame sent\nI0512 12:55:50.150049 1625 log.go:172] (0xc0000e71e0) Data frame received for 5\nI0512 12:55:50.150075 1625 log.go:172] (0xc00059f7c0) (5) Data frame handling\nI0512 12:55:50.150267 1625 log.go:172] (0xc0000e71e0) Data frame received for 3\nI0512 12:55:50.150287 1625 log.go:172] (0xc0008d0000) (3) Data frame handling\nI0512 12:55:50.151610 1625 log.go:172] (0xc0000e71e0) Data frame received for 1\nI0512 
12:55:50.151625 1625 log.go:172] (0xc00059f720) (1) Data frame handling\nI0512 12:55:50.151635 1625 log.go:172] (0xc00059f720) (1) Data frame sent\nI0512 12:55:50.151776 1625 log.go:172] (0xc0000e71e0) (0xc00059f720) Stream removed, broadcasting: 1\nI0512 12:55:50.151822 1625 log.go:172] (0xc0000e71e0) Go away received\nI0512 12:55:50.152034 1625 log.go:172] (0xc0000e71e0) (0xc00059f720) Stream removed, broadcasting: 1\nI0512 12:55:50.152043 1625 log.go:172] (0xc0000e71e0) (0xc0008d0000) Stream removed, broadcasting: 3\nI0512 12:55:50.152048 1625 log.go:172] (0xc0000e71e0) (0xc00059f7c0) Stream removed, broadcasting: 5\n" May 12 12:55:50.156: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 12:55:50.156: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 12:55:50.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3324 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 12:55:50.369: INFO: stderr: "I0512 12:55:50.279813 1646 log.go:172] (0xc000898c60) (0xc00091c640) Create stream\nI0512 12:55:50.279858 1646 log.go:172] (0xc000898c60) (0xc00091c640) Stream added, broadcasting: 1\nI0512 12:55:50.283538 1646 log.go:172] (0xc000898c60) Reply frame received for 1\nI0512 12:55:50.283579 1646 log.go:172] (0xc000898c60) (0xc0007cd680) Create stream\nI0512 12:55:50.283592 1646 log.go:172] (0xc000898c60) (0xc0007cd680) Stream added, broadcasting: 3\nI0512 12:55:50.284294 1646 log.go:172] (0xc000898c60) Reply frame received for 3\nI0512 12:55:50.284324 1646 log.go:172] (0xc000898c60) (0xc000576aa0) Create stream\nI0512 12:55:50.284340 1646 log.go:172] (0xc000898c60) (0xc000576aa0) Stream added, broadcasting: 5\nI0512 12:55:50.285308 1646 log.go:172] (0xc000898c60) Reply frame received for 5\nI0512 12:55:50.363311 1646 
log.go:172] (0xc000898c60) Data frame received for 3\nI0512 12:55:50.363348 1646 log.go:172] (0xc0007cd680) (3) Data frame handling\nI0512 12:55:50.363374 1646 log.go:172] (0xc0007cd680) (3) Data frame sent\nI0512 12:55:50.363490 1646 log.go:172] (0xc000898c60) Data frame received for 5\nI0512 12:55:50.363520 1646 log.go:172] (0xc000576aa0) (5) Data frame handling\nI0512 12:55:50.363543 1646 log.go:172] (0xc000576aa0) (5) Data frame sent\nI0512 12:55:50.363559 1646 log.go:172] (0xc000898c60) Data frame received for 5\nI0512 12:55:50.363576 1646 log.go:172] (0xc000576aa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 12:55:50.363618 1646 log.go:172] (0xc000898c60) Data frame received for 3\nI0512 12:55:50.363650 1646 log.go:172] (0xc0007cd680) (3) Data frame handling\nI0512 12:55:50.364631 1646 log.go:172] (0xc000898c60) Data frame received for 1\nI0512 12:55:50.364647 1646 log.go:172] (0xc00091c640) (1) Data frame handling\nI0512 12:55:50.364658 1646 log.go:172] (0xc00091c640) (1) Data frame sent\nI0512 12:55:50.364674 1646 log.go:172] (0xc000898c60) (0xc00091c640) Stream removed, broadcasting: 1\nI0512 12:55:50.364689 1646 log.go:172] (0xc000898c60) Go away received\nI0512 12:55:50.364993 1646 log.go:172] (0xc000898c60) (0xc00091c640) Stream removed, broadcasting: 1\nI0512 12:55:50.365009 1646 log.go:172] (0xc000898c60) (0xc0007cd680) Stream removed, broadcasting: 3\nI0512 12:55:50.365017 1646 log.go:172] (0xc000898c60) (0xc000576aa0) Stream removed, broadcasting: 5\n" May 12 12:55:50.370: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 12:55:50.370: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 12:55:50.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3324 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' May 12 12:55:50.553: INFO: stderr: "I0512 12:55:50.488982 1666 log.go:172] (0xc000a113f0) (0xc0005485a0) Create stream\nI0512 12:55:50.489086 1666 log.go:172] (0xc000a113f0) (0xc0005485a0) Stream added, broadcasting: 1\nI0512 12:55:50.492659 1666 log.go:172] (0xc000a113f0) Reply frame received for 1\nI0512 12:55:50.492694 1666 log.go:172] (0xc000a113f0) (0xc0007eb5e0) Create stream\nI0512 12:55:50.492707 1666 log.go:172] (0xc000a113f0) (0xc0007eb5e0) Stream added, broadcasting: 3\nI0512 12:55:50.493576 1666 log.go:172] (0xc000a113f0) Reply frame received for 3\nI0512 12:55:50.493616 1666 log.go:172] (0xc000a113f0) (0xc000566a00) Create stream\nI0512 12:55:50.493634 1666 log.go:172] (0xc000a113f0) (0xc000566a00) Stream added, broadcasting: 5\nI0512 12:55:50.494241 1666 log.go:172] (0xc000a113f0) Reply frame received for 5\nI0512 12:55:50.547062 1666 log.go:172] (0xc000a113f0) Data frame received for 3\nI0512 12:55:50.547106 1666 log.go:172] (0xc0007eb5e0) (3) Data frame handling\nI0512 12:55:50.547119 1666 log.go:172] (0xc0007eb5e0) (3) Data frame sent\nI0512 12:55:50.547128 1666 log.go:172] (0xc000a113f0) Data frame received for 3\nI0512 12:55:50.547139 1666 log.go:172] (0xc0007eb5e0) (3) Data frame handling\nI0512 12:55:50.547166 1666 log.go:172] (0xc000a113f0) Data frame received for 5\nI0512 12:55:50.547174 1666 log.go:172] (0xc000566a00) (5) Data frame handling\nI0512 12:55:50.547186 1666 log.go:172] (0xc000566a00) (5) Data frame sent\nI0512 12:55:50.547194 1666 log.go:172] (0xc000a113f0) Data frame received for 5\nI0512 12:55:50.547204 1666 log.go:172] (0xc000566a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 12:55:50.548352 1666 log.go:172] (0xc000a113f0) Data frame received for 1\nI0512 12:55:50.548378 1666 log.go:172] (0xc0005485a0) (1) Data frame handling\nI0512 12:55:50.548397 1666 log.go:172] (0xc0005485a0) (1) Data frame sent\nI0512 12:55:50.548411 1666 log.go:172] 
(0xc000a113f0) (0xc0005485a0) Stream removed, broadcasting: 1\nI0512 12:55:50.548436 1666 log.go:172] (0xc000a113f0) Go away received\nI0512 12:55:50.548763 1666 log.go:172] (0xc000a113f0) (0xc0005485a0) Stream removed, broadcasting: 1\nI0512 12:55:50.548785 1666 log.go:172] (0xc000a113f0) (0xc0007eb5e0) Stream removed, broadcasting: 3\nI0512 12:55:50.548796 1666 log.go:172] (0xc000a113f0) (0xc000566a00) Stream removed, broadcasting: 5\n" May 12 12:55:50.553: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 12:55:50.553: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 12:55:50.553: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 12 12:56:10.621: INFO: Deleting all statefulset in ns statefulset-3324 May 12 12:56:10.624: INFO: Scaling statefulset ss to 0 May 12 12:56:10.633: INFO: Waiting for statefulset status.replicas updated to 0 May 12 12:56:10.635: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:56:10.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3324" for this suite. 
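[editor's note] The StatefulSet test above repeatedly runs `kubectl exec ... /bin/sh -x -c 'mv -v ... || true'` against each pod to break (move index.html out of the Apache docroot) or restore (move it back) the readiness probe. The trailing `|| true` is what keeps the exec invocation from failing: it pins the compound exit status to 0 even when the file has already been moved. A minimal local sketch of that idiom, with an illustrative nonexistent path (no cluster required):

```shell
#!/bin/sh
# Mimic the test's "mv ... || true" idiom: the mv itself fails because the
# source file does not exist, but "|| true" forces the exit status to 0,
# so the surrounding kubectl exec would report success either way.
mv -v /nonexistent/index.html /tmp/ 2>/dev/null || true
echo "exit=$?"
```

Running this prints `exit=0` regardless of whether the `mv` succeeded, which is exactly why the test can toggle readiness idempotently on ss-0, ss-1, and ss-2.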
• [SLOW TEST:84.988 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":108,"skipped":1657,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:56:10.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs May 12 12:56:10.852: INFO: Waiting up to 5m0s for pod "pod-36d6351e-9e32-4a20-b520-72ddbdddb840" in namespace "emptydir-8484" to be "Succeeded or Failed" May 12 12:56:10.906: INFO: Pod "pod-36d6351e-9e32-4a20-b520-72ddbdddb840": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.436554ms May 12 12:56:12.910: INFO: Pod "pod-36d6351e-9e32-4a20-b520-72ddbdddb840": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057527177s May 12 12:56:14.913: INFO: Pod "pod-36d6351e-9e32-4a20-b520-72ddbdddb840": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060954752s May 12 12:56:16.916: INFO: Pod "pod-36d6351e-9e32-4a20-b520-72ddbdddb840": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063270221s STEP: Saw pod success May 12 12:56:16.916: INFO: Pod "pod-36d6351e-9e32-4a20-b520-72ddbdddb840" satisfied condition "Succeeded or Failed" May 12 12:56:16.917: INFO: Trying to get logs from node kali-worker2 pod pod-36d6351e-9e32-4a20-b520-72ddbdddb840 container test-container: STEP: delete the pod May 12 12:56:16.974: INFO: Waiting for pod pod-36d6351e-9e32-4a20-b520-72ddbdddb840 to disappear May 12 12:56:17.027: INFO: Pod pod-36d6351e-9e32-4a20-b520-72ddbdddb840 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:56:17.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8484" for this suite. 
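[editor's note] The EmptyDir test above mounts a tmpfs-backed emptyDir, writes a file as root with mode 0644, and asserts on the mount flags and permission bits from inside the test container. The permission half of that check can be sketched locally; this assumes GNU coreutils `stat -c` (the tmpfs mount itself needs a real pod):

```shell
#!/bin/sh
# Create a file, force mode 0644, and read the permission bits back the
# way a test container could. Assumes GNU coreutils (`stat -c '%a'`).
f=$(mktemp)
chmod 0644 "$f"
stat -c '%a' "$f"    # prints: 644
rm -f "$f"
```

The e2e framework performs the equivalent check via the pod's container output, then waits for the pod to reach "Succeeded or Failed" as shown in the log.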
• [SLOW TEST:6.357 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:56:17.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4851 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-4851 May 12 12:56:17.255: INFO: Found 0 stateful pods, waiting for 1 May 12 12:56:27.259: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale 
subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 12 12:56:27.297: INFO: Deleting all statefulset in ns statefulset-4851 May 12 12:56:27.423: INFO: Scaling statefulset ss to 0 May 12 12:56:47.475: INFO: Waiting for statefulset status.replicas updated to 0 May 12 12:56:47.478: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:56:47.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4851" for this suite. • [SLOW TEST:30.492 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":110,"skipped":1693,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:56:47.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service 
account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9056.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9056.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9056.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9056.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9056.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9056.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 12:56:57.754: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:56:57.757: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:56:57.759: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:56:57.761: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:56:57.769: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:56:57.771: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from 
pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:56:57.773: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:56:57.776: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:56:57.781: INFO: Lookups using dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local] May 12 12:57:02.784: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:02.831: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:02.834: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local from 
pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:03.040: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:03.051: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:03.054: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:03.057: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:03.060: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:03.178: INFO: Lookups using dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local] May 12 12:57:07.786: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:07.788: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:07.790: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:07.793: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:07.800: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:07.802: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:07.805: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod 
dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:07.807: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:07.811: INFO: Lookups using dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local] May 12 12:57:12.786: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:12.790: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:12.794: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:12.797: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod 
dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:12.806: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:12.809: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:12.812: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:12.815: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:12.822: INFO: Lookups using dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local] May 12 12:57:17.787: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:17.793: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:17.795: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:17.798: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:17.805: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:17.807: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:17.810: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:17.812: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:17.816: INFO: Lookups using dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local] May 12 12:57:22.784: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:22.787: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:22.790: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:22.793: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:22.799: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:22.801: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:22.807: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:22.810: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local from pod dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c: the server could not find the requested resource (get pods dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c) May 12 12:57:22.813: INFO: Lookups using dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9056.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9056.svc.cluster.local jessie_udp@dns-test-service-2.dns-9056.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9056.svc.cluster.local] May 12 12:57:27.808: INFO: DNS probes using dns-9056/dns-test-cef9ca05-4eb3-4535-bde0-81d74354f21c succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 
12:57:28.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9056" for this suite. • [SLOW TEST:40.861 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":111,"skipped":1702,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:57:28.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 12 12:57:28.496: INFO: Waiting up to 5m0s for pod "downward-api-c238275b-7338-49f1-8efb-8c39c824ba49" in namespace "downward-api-6726" to be "Succeeded or Failed" May 12 12:57:28.524: INFO: Pod "downward-api-c238275b-7338-49f1-8efb-8c39c824ba49": Phase="Pending", Reason="", readiness=false. Elapsed: 28.375093ms May 12 12:57:30.788: INFO: Pod "downward-api-c238275b-7338-49f1-8efb-8c39c824ba49": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.291674195s May 12 12:57:32.791: INFO: Pod "downward-api-c238275b-7338-49f1-8efb-8c39c824ba49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294966687s May 12 12:57:34.801: INFO: Pod "downward-api-c238275b-7338-49f1-8efb-8c39c824ba49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304622639s May 12 12:57:36.804: INFO: Pod "downward-api-c238275b-7338-49f1-8efb-8c39c824ba49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.308139213s STEP: Saw pod success May 12 12:57:36.804: INFO: Pod "downward-api-c238275b-7338-49f1-8efb-8c39c824ba49" satisfied condition "Succeeded or Failed" May 12 12:57:36.806: INFO: Trying to get logs from node kali-worker2 pod downward-api-c238275b-7338-49f1-8efb-8c39c824ba49 container dapi-container: STEP: delete the pod May 12 12:57:37.046: INFO: Waiting for pod downward-api-c238275b-7338-49f1-8efb-8c39c824ba49 to disappear May 12 12:57:37.070: INFO: Pod downward-api-c238275b-7338-49f1-8efb-8c39c824ba49 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:57:37.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6726" for this suite. 
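The Downward API test above verifies that when a container declares no resource limits, env vars populated via `resourceFieldRef` (e.g. `limits.cpu`, `limits.memory`) fall back to the node's allocatable capacity. A minimal local model of that defaulting rule, with purely illustrative values (this is not the Kubernetes implementation):

```python
# Hedged model of downward-API limit defaulting: an explicit container limit
# wins; otherwise the value exposed to the pod is the node's allocatable.
def downward_api_limit(container_limits: dict, resource: str, node_allocatable: dict) -> str:
    """Return the value a resourceFieldRef env var would expose for `resource`."""
    if resource in container_limits:
        return container_limits[resource]
    # No explicit limit set: default to node allocatable, as the test asserts.
    return node_allocatable[resource]

node = {"cpu": "16", "memory": "64Gi"}  # hypothetical allocatable values
print(downward_api_limit({}, "cpu", node))               # falls back to node value
print(downward_api_limit({"cpu": "500m"}, "cpu", node))  # explicit limit wins
```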
• [SLOW TEST:8.830 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1703,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:57:37.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 12:57:37.735: INFO: Waiting up to 5m0s for pod "pod-6e0bed4d-17d2-4e0c-bc23-623b158e99b4" in namespace "emptydir-1831" to be "Succeeded or Failed" May 12 12:57:37.902: INFO: Pod "pod-6e0bed4d-17d2-4e0c-bc23-623b158e99b4": Phase="Pending", Reason="", readiness=false. Elapsed: 167.469639ms May 12 12:57:39.907: INFO: Pod "pod-6e0bed4d-17d2-4e0c-bc23-623b158e99b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172212124s May 12 12:57:41.913: INFO: Pod "pod-6e0bed4d-17d2-4e0c-bc23-623b158e99b4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.178116296s May 12 12:57:43.939: INFO: Pod "pod-6e0bed4d-17d2-4e0c-bc23-623b158e99b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203829915s STEP: Saw pod success May 12 12:57:43.939: INFO: Pod "pod-6e0bed4d-17d2-4e0c-bc23-623b158e99b4" satisfied condition "Succeeded or Failed" May 12 12:57:43.942: INFO: Trying to get logs from node kali-worker pod pod-6e0bed4d-17d2-4e0c-bc23-623b158e99b4 container test-container: STEP: delete the pod May 12 12:57:44.353: INFO: Waiting for pod pod-6e0bed4d-17d2-4e0c-bc23-623b158e99b4 to disappear May 12 12:57:44.380: INFO: Pod pod-6e0bed4d-17d2-4e0c-bc23-623b158e99b4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:57:44.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1831" for this suite. • [SLOW TEST:7.169 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1709,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
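The EmptyDir test that just passed mounts a default-medium emptyDir with mode `0777` and checks the permission bits from inside the pod. A local stand-in for that permission check (not the e2e container image's logic) looks like:

```python
# Create a directory, force mode 0777 as the emptyDir test does, and read
# the permission bits back. chmod is not subject to umask, so 0o777 sticks.
import os
import stat
import tempfile

def mode_bits(path: str) -> str:
    """Return the permission bits of `path` as an octal string."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))

d = tempfile.mkdtemp()
os.chmod(d, 0o777)
print(mode_bits(d))  # -> '0o777'
```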
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:57:44.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 12 12:57:45.325: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 12 12:57:47.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885065, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885065, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885065, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885065, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:57:50.503: INFO: Waiting 
for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 12 12:57:50.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:57:52.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-870" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.619 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":114,"skipped":1771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:57:53.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:58:10.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8336" for this suite. • [SLOW TEST:17.666 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":115,"skipped":1798,"failed":0} [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:58:10.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-868d258b-8894-46d1-84a0-d381d58f6fde May 12 12:58:10.760: INFO: Pod name my-hostname-basic-868d258b-8894-46d1-84a0-d381d58f6fde: Found 0 pods out of 1 May 12 12:58:15.873: INFO: Pod name my-hostname-basic-868d258b-8894-46d1-84a0-d381d58f6fde: Found 1 pods out of 1 May 12 12:58:15.873: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-868d258b-8894-46d1-84a0-d381d58f6fde" are running May 12 12:58:15.917: INFO: Pod "my-hostname-basic-868d258b-8894-46d1-84a0-d381d58f6fde-btlng" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:58:10 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:58:14 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:58:14 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:58:10 +0000 UTC Reason: Message:}]) May 12 12:58:15.917: INFO: Trying to dial the pod May 12 12:58:20.927: INFO: Controller my-hostname-basic-868d258b-8894-46d1-84a0-d381d58f6fde: Got expected result from replica 1 [my-hostname-basic-868d258b-8894-46d1-84a0-d381d58f6fde-btlng]: "my-hostname-basic-868d258b-8894-46d1-84a0-d381d58f6fde-btlng", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:58:20.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-931" for this suite. • [SLOW TEST:10.260 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":116,"skipped":1798,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:58:20.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-703 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 12:58:21.023: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 12 12:58:21.137: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 12:58:23.288: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 12:58:25.140: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 12:58:27.140: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:58:29.140: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:58:31.141: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:58:33.141: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:58:35.140: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 12:58:37.140: INFO: The status of Pod netserver-0 is Running (Ready = true) May 12 12:58:37.144: INFO: The status of Pod netserver-1 is Running (Ready = false) May 12 12:58:39.148: INFO: The status of Pod netserver-1 is Running (Ready = false) May 12 12:58:41.148: INFO: The status of Pod netserver-1 is Running (Ready = false) May 12 12:58:43.173: INFO: The status of Pod netserver-1 is Running (Ready = false) May 12 12:58:45.148: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 12 12:58:55.616: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.13:8080/hostName | grep -v '^\s*$'] 
Namespace:pod-network-test-703 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:58:55.617: INFO: >>> kubeConfig: /root/.kube/config I0512 12:58:55.644027 7 log.go:172] (0xc002b71970) (0xc001a83360) Create stream I0512 12:58:55.644054 7 log.go:172] (0xc002b71970) (0xc001a83360) Stream added, broadcasting: 1 I0512 12:58:55.645806 7 log.go:172] (0xc002b71970) Reply frame received for 1 I0512 12:58:55.645829 7 log.go:172] (0xc002b71970) (0xc002d939a0) Create stream I0512 12:58:55.645837 7 log.go:172] (0xc002b71970) (0xc002d939a0) Stream added, broadcasting: 3 I0512 12:58:55.646631 7 log.go:172] (0xc002b71970) Reply frame received for 3 I0512 12:58:55.646667 7 log.go:172] (0xc002b71970) (0xc002770a00) Create stream I0512 12:58:55.646682 7 log.go:172] (0xc002b71970) (0xc002770a00) Stream added, broadcasting: 5 I0512 12:58:55.647361 7 log.go:172] (0xc002b71970) Reply frame received for 5 I0512 12:58:55.706648 7 log.go:172] (0xc002b71970) Data frame received for 3 I0512 12:58:55.706682 7 log.go:172] (0xc002d939a0) (3) Data frame handling I0512 12:58:55.706692 7 log.go:172] (0xc002d939a0) (3) Data frame sent I0512 12:58:55.706744 7 log.go:172] (0xc002b71970) Data frame received for 5 I0512 12:58:55.706765 7 log.go:172] (0xc002770a00) (5) Data frame handling I0512 12:58:55.706904 7 log.go:172] (0xc002b71970) Data frame received for 3 I0512 12:58:55.706932 7 log.go:172] (0xc002d939a0) (3) Data frame handling I0512 12:58:55.708116 7 log.go:172] (0xc002b71970) Data frame received for 1 I0512 12:58:55.708134 7 log.go:172] (0xc001a83360) (1) Data frame handling I0512 12:58:55.708150 7 log.go:172] (0xc001a83360) (1) Data frame sent I0512 12:58:55.708161 7 log.go:172] (0xc002b71970) (0xc001a83360) Stream removed, broadcasting: 1 I0512 12:58:55.708172 7 log.go:172] (0xc002b71970) Go away received I0512 12:58:55.708276 7 log.go:172] (0xc002b71970) (0xc001a83360) Stream removed, broadcasting: 1 I0512 
12:58:55.708291 7 log.go:172] (0xc002b71970) (0xc002d939a0) Stream removed, broadcasting: 3 I0512 12:58:55.708298 7 log.go:172] (0xc002b71970) (0xc002770a00) Stream removed, broadcasting: 5 May 12 12:58:55.708: INFO: Found all expected endpoints: [netserver-0] May 12 12:58:55.711: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.82:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-703 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:58:55.711: INFO: >>> kubeConfig: /root/.kube/config I0512 12:58:55.741299 7 log.go:172] (0xc002a540b0) (0xc001a83c20) Create stream I0512 12:58:55.741329 7 log.go:172] (0xc002a540b0) (0xc001a83c20) Stream added, broadcasting: 1 I0512 12:58:55.742820 7 log.go:172] (0xc002a540b0) Reply frame received for 1 I0512 12:58:55.742845 7 log.go:172] (0xc002a540b0) (0xc001a83cc0) Create stream I0512 12:58:55.742855 7 log.go:172] (0xc002a540b0) (0xc001a83cc0) Stream added, broadcasting: 3 I0512 12:58:55.743702 7 log.go:172] (0xc002a540b0) Reply frame received for 3 I0512 12:58:55.743741 7 log.go:172] (0xc002a540b0) (0xc002770aa0) Create stream I0512 12:58:55.743755 7 log.go:172] (0xc002a540b0) (0xc002770aa0) Stream added, broadcasting: 5 I0512 12:58:55.744604 7 log.go:172] (0xc002a540b0) Reply frame received for 5 I0512 12:58:55.817068 7 log.go:172] (0xc002a540b0) Data frame received for 3 I0512 12:58:55.817102 7 log.go:172] (0xc001a83cc0) (3) Data frame handling I0512 12:58:55.817293 7 log.go:172] (0xc001a83cc0) (3) Data frame sent I0512 12:58:55.817497 7 log.go:172] (0xc002a540b0) Data frame received for 5 I0512 12:58:55.817564 7 log.go:172] (0xc002770aa0) (5) Data frame handling I0512 12:58:55.817597 7 log.go:172] (0xc002a540b0) Data frame received for 3 I0512 12:58:55.817616 7 log.go:172] (0xc001a83cc0) (3) Data frame handling I0512 12:58:55.819465 7 log.go:172] (0xc002a540b0) Data frame 
received for 1 I0512 12:58:55.819481 7 log.go:172] (0xc001a83c20) (1) Data frame handling I0512 12:58:55.819494 7 log.go:172] (0xc001a83c20) (1) Data frame sent I0512 12:58:55.819514 7 log.go:172] (0xc002a540b0) (0xc001a83c20) Stream removed, broadcasting: 1 I0512 12:58:55.819547 7 log.go:172] (0xc002a540b0) Go away received I0512 12:58:55.819597 7 log.go:172] (0xc002a540b0) (0xc001a83c20) Stream removed, broadcasting: 1 I0512 12:58:55.819614 7 log.go:172] (0xc002a540b0) (0xc001a83cc0) Stream removed, broadcasting: 3 I0512 12:58:55.819628 7 log.go:172] (0xc002a540b0) (0xc002770aa0) Stream removed, broadcasting: 5 May 12 12:58:55.819: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:58:55.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-703" for this suite. • [SLOW TEST:35.077 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1802,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:58:56.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 12 12:58:59.327: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:59:15.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9302" for this suite. 
• [SLOW TEST:19.936 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:59:15.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-129167d1-a8ec-46d1-9415-5ca0b4551555 STEP: Creating a pod to test consume secrets May 12 12:59:16.272: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29" in namespace "projected-560" to be "Succeeded or Failed" May 12 12:59:16.358: INFO: Pod "pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29": Phase="Pending", Reason="", readiness=false. Elapsed: 85.851212ms May 12 12:59:18.361: INFO: Pod "pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.089236214s May 12 12:59:20.367: INFO: Pod "pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095709979s May 12 12:59:22.413: INFO: Pod "pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29": Phase="Running", Reason="", readiness=true. Elapsed: 6.141617818s May 12 12:59:24.417: INFO: Pod "pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.145561773s STEP: Saw pod success May 12 12:59:24.417: INFO: Pod "pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29" satisfied condition "Succeeded or Failed" May 12 12:59:24.421: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29 container secret-volume-test: STEP: delete the pod May 12 12:59:24.482: INFO: Waiting for pod pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29 to disappear May 12 12:59:24.562: INFO: Pod pod-projected-secrets-e5c06b41-6ba8-4b95-b4b3-52659e25ff29 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:59:24.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-560" for this suite. 
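The "consumable in multiple volumes" test above mounts one projected secret at two mount points in the same pod. A minimal sketch of such a manifest (names and mount paths are illustrative; the image is the agnhost image recorded elsewhere in this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]
    volumeMounts:
    - name: secret-volume-1          # same secret, first mount
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2          # same secret, second mount
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test   # hypothetical secret name
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
```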
• [SLOW TEST:8.623 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":1878,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:59:24.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 12 12:59:31.581: INFO: 10 pods remaining May 12 12:59:31.581: INFO: 10 pods has nil DeletionTimestamp May 12 12:59:31.581: INFO: May 12 12:59:33.294: INFO: 9 pods remaining May 12 12:59:33.294: INFO: 0 pods has nil DeletionTimestamp May 12 12:59:33.294: INFO: May 12 12:59:34.558: INFO: 0 pods remaining May 12 12:59:34.558: INFO: 0 pods has nil DeletionTimestamp May 12 12:59:34.558: INFO: May 12 12:59:35.766: INFO: 0 pods remaining May 12 12:59:35.766: INFO: 0 pods has nil 
DeletionTimestamp May 12 12:59:35.766: INFO: STEP: Gathering metrics W0512 12:59:37.161984 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 12:59:37.162: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:59:37.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5977" for this suite. 
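The deleteOptions referenced by the test name above ("keep the rc around until all its pods are deleted if the deleteOptions says so") correspond to foreground cascading deletion. A sketch of the request body sent with the DELETE call (the later orphan-RS test in this run uses `Orphan` instead; `Background` is the third valid value):

```yaml
apiVersion: v1
kind: DeleteOptions
# Foreground: the owner (here, the RC) stays, with a deletionTimestamp set,
# until the garbage collector has removed all dependents (its pods).
propagationPolicy: Foreground
```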
• [SLOW TEST:12.922 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":120,"skipped":1914,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:59:37.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 12 12:59:38.162: INFO: Waiting up to 5m0s for pod "downward-api-5d79dd76-e7ec-4723-b904-51659e122b36" in namespace "downward-api-399" to be "Succeeded or Failed" May 12 12:59:38.193: INFO: Pod "downward-api-5d79dd76-e7ec-4723-b904-51659e122b36": Phase="Pending", Reason="", readiness=false. Elapsed: 30.992689ms May 12 12:59:40.197: INFO: Pod "downward-api-5d79dd76-e7ec-4723-b904-51659e122b36": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034754994s May 12 12:59:42.200: INFO: Pod "downward-api-5d79dd76-e7ec-4723-b904-51659e122b36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038192693s STEP: Saw pod success May 12 12:59:42.200: INFO: Pod "downward-api-5d79dd76-e7ec-4723-b904-51659e122b36" satisfied condition "Succeeded or Failed" May 12 12:59:42.203: INFO: Trying to get logs from node kali-worker pod downward-api-5d79dd76-e7ec-4723-b904-51659e122b36 container dapi-container: STEP: delete the pod May 12 12:59:42.681: INFO: Waiting for pod downward-api-5d79dd76-e7ec-4723-b904-51659e122b36 to disappear May 12 12:59:42.729: INFO: Pod downward-api-5d79dd76-e7ec-4723-b904-51659e122b36 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:59:42.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-399" for this suite. • [SLOW TEST:5.313 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":1932,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:59:42.810: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-1b14a0c9-e6d7-4d5d-b11e-891ca3fa6452 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:59:43.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2216" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":122,"skipped":1950,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:59:43.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
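The pod created in the step above can be sketched as the following manifest; the nameserver, search domain, image, and args are taken from the PodSpec dumped later in this log, while the layout itself is a reconstruction:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-4073
spec:
  dnsPolicy: "None"                  # ignore cluster DNS entirely
  dnsConfig:                         # pod-level resolv.conf contents
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]
```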
May 12 12:59:43.562: INFO: Created pod &Pod{ObjectMeta:{dns-4073 dns-4073 /api/v1/namespaces/dns-4073/pods/dns-4073 89830d50-40a9-487c-9060-09fe96c1cd6b 3728463 0 2020-05-12 12:59:43 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-12 12:59:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xm9d2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xm9d2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xm9d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kuber
netes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:59:43.627: INFO: The status of Pod dns-4073 is Pending, waiting for it to be Running (with Ready = true) May 12 12:59:45.666: INFO: The status of Pod dns-4073 is Pending, waiting for it to be Running (with Ready = true) May 12 12:59:47.671: INFO: The status of Pod dns-4073 is Pending, waiting for it to be Running (with Ready = true) May 12 12:59:49.709: INFO: The status of Pod dns-4073 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
May 12 12:59:49.709: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4073 PodName:dns-4073 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:59:49.709: INFO: >>> kubeConfig: /root/.kube/config I0512 12:59:49.737641 7 log.go:172] (0xc002e586e0) (0xc00236b0e0) Create stream I0512 12:59:49.737677 7 log.go:172] (0xc002e586e0) (0xc00236b0e0) Stream added, broadcasting: 1 I0512 12:59:49.739367 7 log.go:172] (0xc002e586e0) Reply frame received for 1 I0512 12:59:49.739397 7 log.go:172] (0xc002e586e0) (0xc001dd60a0) Create stream I0512 12:59:49.739408 7 log.go:172] (0xc002e586e0) (0xc001dd60a0) Stream added, broadcasting: 3 I0512 12:59:49.740144 7 log.go:172] (0xc002e586e0) Reply frame received for 3 I0512 12:59:49.740180 7 log.go:172] (0xc002e586e0) (0xc001eb4a00) Create stream I0512 12:59:49.740190 7 log.go:172] (0xc002e586e0) (0xc001eb4a00) Stream added, broadcasting: 5 I0512 12:59:49.741085 7 log.go:172] (0xc002e586e0) Reply frame received for 5 I0512 12:59:50.091283 7 log.go:172] (0xc002e586e0) Data frame received for 3 I0512 12:59:50.091335 7 log.go:172] (0xc001dd60a0) (3) Data frame handling I0512 12:59:50.091375 7 log.go:172] (0xc001dd60a0) (3) Data frame sent I0512 12:59:50.093918 7 log.go:172] (0xc002e586e0) Data frame received for 3 I0512 12:59:50.093944 7 log.go:172] (0xc001dd60a0) (3) Data frame handling I0512 12:59:50.093977 7 log.go:172] (0xc002e586e0) Data frame received for 5 I0512 12:59:50.094021 7 log.go:172] (0xc001eb4a00) (5) Data frame handling I0512 12:59:50.096459 7 log.go:172] (0xc002e586e0) Data frame received for 1 I0512 12:59:50.096482 7 log.go:172] (0xc00236b0e0) (1) Data frame handling I0512 12:59:50.096493 7 log.go:172] (0xc00236b0e0) (1) Data frame sent I0512 12:59:50.096509 7 log.go:172] (0xc002e586e0) (0xc00236b0e0) Stream removed, broadcasting: 1 I0512 12:59:50.096527 7 log.go:172] (0xc002e586e0) Go away received I0512 12:59:50.096773 7 log.go:172] (0xc002e586e0) 
(0xc00236b0e0) Stream removed, broadcasting: 1 I0512 12:59:50.096795 7 log.go:172] (0xc002e586e0) (0xc001dd60a0) Stream removed, broadcasting: 3 I0512 12:59:50.096808 7 log.go:172] (0xc002e586e0) (0xc001eb4a00) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 12 12:59:50.096: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4073 PodName:dns-4073 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:59:50.096: INFO: >>> kubeConfig: /root/.kube/config I0512 12:59:50.335106 7 log.go:172] (0xc002e58d10) (0xc00236b2c0) Create stream I0512 12:59:50.335151 7 log.go:172] (0xc002e58d10) (0xc00236b2c0) Stream added, broadcasting: 1 I0512 12:59:50.337245 7 log.go:172] (0xc002e58d10) Reply frame received for 1 I0512 12:59:50.337318 7 log.go:172] (0xc002e58d10) (0xc002858000) Create stream I0512 12:59:50.337341 7 log.go:172] (0xc002e58d10) (0xc002858000) Stream added, broadcasting: 3 I0512 12:59:50.338692 7 log.go:172] (0xc002e58d10) Reply frame received for 3 I0512 12:59:50.338753 7 log.go:172] (0xc002e58d10) (0xc001eb4b40) Create stream I0512 12:59:50.338771 7 log.go:172] (0xc002e58d10) (0xc001eb4b40) Stream added, broadcasting: 5 I0512 12:59:50.339839 7 log.go:172] (0xc002e58d10) Reply frame received for 5 I0512 12:59:50.416537 7 log.go:172] (0xc002e58d10) Data frame received for 3 I0512 12:59:50.416563 7 log.go:172] (0xc002858000) (3) Data frame handling I0512 12:59:50.416582 7 log.go:172] (0xc002858000) (3) Data frame sent I0512 12:59:50.417964 7 log.go:172] (0xc002e58d10) Data frame received for 3 I0512 12:59:50.418000 7 log.go:172] (0xc002858000) (3) Data frame handling I0512 12:59:50.418032 7 log.go:172] (0xc002e58d10) Data frame received for 5 I0512 12:59:50.418047 7 log.go:172] (0xc001eb4b40) (5) Data frame handling I0512 12:59:50.419318 7 log.go:172] (0xc002e58d10) Data frame received for 1 I0512 12:59:50.419335 7 log.go:172] (0xc00236b2c0) (1) 
Data frame handling I0512 12:59:50.419354 7 log.go:172] (0xc00236b2c0) (1) Data frame sent I0512 12:59:50.419367 7 log.go:172] (0xc002e58d10) (0xc00236b2c0) Stream removed, broadcasting: 1 I0512 12:59:50.419384 7 log.go:172] (0xc002e58d10) Go away received I0512 12:59:50.419431 7 log.go:172] (0xc002e58d10) (0xc00236b2c0) Stream removed, broadcasting: 1 I0512 12:59:50.419451 7 log.go:172] (0xc002e58d10) (0xc002858000) Stream removed, broadcasting: 3 I0512 12:59:50.419468 7 log.go:172] (0xc002e58d10) (0xc001eb4b40) Stream removed, broadcasting: 5 May 12 12:59:50.419: INFO: Deleting pod dns-4073... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:59:50.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4073" for this suite. • [SLOW TEST:8.362 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":123,"skipped":1961,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:59:51.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default 
service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0512 12:59:56.151818 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 12:59:56.151: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 12:59:56.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3186" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":124,"skipped":2017,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 12 12:59:56.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 13:00:03.413: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 12 13:00:03.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3603" for this suite. 
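The container-runtime test just completed relies on `terminationMessagePolicy: FallbackToLogsOnError`: when a container fails without writing a termination-message file, the tail of its log (here, "DONE") becomes the termination message. A hedged sketch of such a container spec (image and command are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-test     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term-container
    image: busybox                   # illustrative image
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]   # fail after logging
    terminationMessagePolicy: FallbackToLogsOnError
```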
• [SLOW TEST:7.999 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2019,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:00:04.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:00:05.052: INFO: Waiting
up to 5m0s for pod "downwardapi-volume-2278c748-ec0e-43b4-89d4-c765913b11ce" in namespace "projected-6170" to be "Succeeded or Failed"
May 12 13:00:05.116: INFO: Pod "downwardapi-volume-2278c748-ec0e-43b4-89d4-c765913b11ce": Phase="Pending", Reason="", readiness=false. Elapsed: 64.545685ms
May 12 13:00:07.121: INFO: Pod "downwardapi-volume-2278c748-ec0e-43b4-89d4-c765913b11ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069219397s
May 12 13:00:09.124: INFO: Pod "downwardapi-volume-2278c748-ec0e-43b4-89d4-c765913b11ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072602486s
May 12 13:00:11.127: INFO: Pod "downwardapi-volume-2278c748-ec0e-43b4-89d4-c765913b11ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075796252s
STEP: Saw pod success
May 12 13:00:11.127: INFO: Pod "downwardapi-volume-2278c748-ec0e-43b4-89d4-c765913b11ce" satisfied condition "Succeeded or Failed"
May 12 13:00:11.130: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-2278c748-ec0e-43b4-89d4-c765913b11ce container client-container:
STEP: delete the pod
May 12 13:00:11.199: INFO: Waiting for pod downwardapi-volume-2278c748-ec0e-43b4-89d4-c765913b11ce to disappear
May 12 13:00:11.208: INFO: Pod downwardapi-volume-2278c748-ec0e-43b4-89d4-c765913b11ce no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:00:11.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6170" for this suite.
• [SLOW TEST:7.057 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2030,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:00:11.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:00:11.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2733" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":127,"skipped":2041,"failed":0}
SSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:00:11.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:00:11.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8888
I0512 13:00:11.471163 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8888, replica count: 1
I0512 13:00:12.521500 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 13:00:13.521743 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 13:00:14.521949 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 13:00:15.522278 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 12 13:00:15.655: INFO: Created: latency-svc-jcvph
May 12 13:00:15.732: INFO: Got endpoints: latency-svc-jcvph [109.759425ms]
May 12
13:00:15.819: INFO: Created: latency-svc-59h2w May 12 13:00:15.881: INFO: Got endpoints: latency-svc-59h2w [149.008822ms] May 12 13:00:15.889: INFO: Created: latency-svc-lhwrs May 12 13:00:15.906: INFO: Got endpoints: latency-svc-lhwrs [173.71598ms] May 12 13:00:15.938: INFO: Created: latency-svc-nm2h5 May 12 13:00:15.956: INFO: Got endpoints: latency-svc-nm2h5 [224.358352ms] May 12 13:00:16.054: INFO: Created: latency-svc-jxwkf May 12 13:00:16.104: INFO: Got endpoints: latency-svc-jxwkf [372.13561ms] May 12 13:00:16.135: INFO: Created: latency-svc-d68tp May 12 13:00:16.155: INFO: Got endpoints: latency-svc-d68tp [423.013137ms] May 12 13:00:16.221: INFO: Created: latency-svc-7nst2 May 12 13:00:16.251: INFO: Got endpoints: latency-svc-7nst2 [518.734502ms] May 12 13:00:16.297: INFO: Created: latency-svc-vhd7d May 12 13:00:16.303: INFO: Got endpoints: latency-svc-vhd7d [570.938756ms] May 12 13:00:16.377: INFO: Created: latency-svc-zbn8d May 12 13:00:16.394: INFO: Got endpoints: latency-svc-zbn8d [662.243991ms] May 12 13:00:16.443: INFO: Created: latency-svc-fxc4w May 12 13:00:16.486: INFO: Got endpoints: latency-svc-fxc4w [753.6457ms] May 12 13:00:16.561: INFO: Created: latency-svc-bsbfp May 12 13:00:16.612: INFO: Got endpoints: latency-svc-bsbfp [879.675329ms] May 12 13:00:16.665: INFO: Created: latency-svc-mmfzf May 12 13:00:16.681: INFO: Got endpoints: latency-svc-mmfzf [949.381595ms] May 12 13:00:16.774: INFO: Created: latency-svc-knwfl May 12 13:00:16.802: INFO: Got endpoints: latency-svc-knwfl [1.070314673s] May 12 13:00:16.838: INFO: Created: latency-svc-ndcrx May 12 13:00:16.916: INFO: Got endpoints: latency-svc-ndcrx [1.184207315s] May 12 13:00:16.957: INFO: Created: latency-svc-q96pr May 12 13:00:16.970: INFO: Got endpoints: latency-svc-q96pr [1.238387214s] May 12 13:00:17.066: INFO: Created: latency-svc-2s4pw May 12 13:00:17.091: INFO: Got endpoints: latency-svc-2s4pw [1.358441754s] May 12 13:00:17.150: INFO: Created: latency-svc-st2pp May 12 13:00:17.214: 
INFO: Got endpoints: latency-svc-st2pp [1.332871768s] May 12 13:00:17.270: INFO: Created: latency-svc-dbh8v May 12 13:00:17.284: INFO: Got endpoints: latency-svc-dbh8v [1.378729951s] May 12 13:00:17.378: INFO: Created: latency-svc-fl5qm May 12 13:00:17.387: INFO: Got endpoints: latency-svc-fl5qm [1.430911162s] May 12 13:00:17.444: INFO: Created: latency-svc-bmqnq May 12 13:00:17.459: INFO: Got endpoints: latency-svc-bmqnq [1.355211078s] May 12 13:00:17.528: INFO: Created: latency-svc-74rf4 May 12 13:00:17.532: INFO: Got endpoints: latency-svc-74rf4 [1.37733673s] May 12 13:00:17.564: INFO: Created: latency-svc-rp6wq May 12 13:00:17.580: INFO: Got endpoints: latency-svc-rp6wq [1.328807049s] May 12 13:00:17.665: INFO: Created: latency-svc-l2wvh May 12 13:00:17.696: INFO: Got endpoints: latency-svc-l2wvh [1.392819276s] May 12 13:00:17.751: INFO: Created: latency-svc-flscm May 12 13:00:17.906: INFO: Got endpoints: latency-svc-flscm [1.511198523s] May 12 13:00:18.073: INFO: Created: latency-svc-9jstj May 12 13:00:18.139: INFO: Got endpoints: latency-svc-9jstj [1.653032041s] May 12 13:00:18.246: INFO: Created: latency-svc-qp4bc May 12 13:00:18.278: INFO: Got endpoints: latency-svc-qp4bc [1.666069236s] May 12 13:00:18.483: INFO: Created: latency-svc-grv2j May 12 13:00:18.556: INFO: Got endpoints: latency-svc-grv2j [1.875055631s] May 12 13:00:18.557: INFO: Created: latency-svc-jhmkp May 12 13:00:18.629: INFO: Got endpoints: latency-svc-jhmkp [1.826960471s] May 12 13:00:18.699: INFO: Created: latency-svc-sl957 May 12 13:00:18.785: INFO: Got endpoints: latency-svc-sl957 [1.868927924s] May 12 13:00:18.822: INFO: Created: latency-svc-56xdp May 12 13:00:18.841: INFO: Got endpoints: latency-svc-56xdp [1.870922689s] May 12 13:00:18.953: INFO: Created: latency-svc-nc2zh May 12 13:00:18.956: INFO: Got endpoints: latency-svc-nc2zh [1.865808678s] May 12 13:00:19.035: INFO: Created: latency-svc-2rvcl May 12 13:00:19.091: INFO: Got endpoints: latency-svc-2rvcl [1.876782444s] May 12 
13:00:19.126: INFO: Created: latency-svc-4wlwg May 12 13:00:19.146: INFO: Got endpoints: latency-svc-4wlwg [1.861716516s] May 12 13:00:19.187: INFO: Created: latency-svc-5xt8s May 12 13:00:19.230: INFO: Got endpoints: latency-svc-5xt8s [1.842229229s] May 12 13:00:19.276: INFO: Created: latency-svc-rw5x7 May 12 13:00:19.291: INFO: Got endpoints: latency-svc-rw5x7 [1.831485698s] May 12 13:00:19.318: INFO: Created: latency-svc-zgswb May 12 13:00:19.372: INFO: Got endpoints: latency-svc-zgswb [1.839384945s] May 12 13:00:19.418: INFO: Created: latency-svc-xdp4h May 12 13:00:19.438: INFO: Got endpoints: latency-svc-xdp4h [1.857909496s] May 12 13:00:19.528: INFO: Created: latency-svc-sgb84 May 12 13:00:19.534: INFO: Got endpoints: latency-svc-sgb84 [1.837927736s] May 12 13:00:19.587: INFO: Created: latency-svc-bbtl6 May 12 13:00:19.607: INFO: Got endpoints: latency-svc-bbtl6 [1.700980292s] May 12 13:00:19.672: INFO: Created: latency-svc-wz4td May 12 13:00:19.684: INFO: Got endpoints: latency-svc-wz4td [1.545389274s] May 12 13:00:19.711: INFO: Created: latency-svc-5zmcd May 12 13:00:19.743: INFO: Got endpoints: latency-svc-5zmcd [1.465123851s] May 12 13:00:19.815: INFO: Created: latency-svc-gqdxh May 12 13:00:19.816: INFO: Got endpoints: latency-svc-gqdxh [1.259383566s] May 12 13:00:19.852: INFO: Created: latency-svc-m977q May 12 13:00:19.866: INFO: Got endpoints: latency-svc-m977q [1.236148527s] May 12 13:00:19.970: INFO: Created: latency-svc-chp5l May 12 13:00:19.979: INFO: Got endpoints: latency-svc-chp5l [1.194091851s] May 12 13:00:20.006: INFO: Created: latency-svc-4h9tt May 12 13:00:20.022: INFO: Got endpoints: latency-svc-4h9tt [1.180123665s] May 12 13:00:20.103: INFO: Created: latency-svc-zqgrk May 12 13:00:20.116: INFO: Got endpoints: latency-svc-zqgrk [1.159795396s] May 12 13:00:20.167: INFO: Created: latency-svc-lvb67 May 12 13:00:20.177: INFO: Got endpoints: latency-svc-lvb67 [1.085921841s] May 12 13:00:20.241: INFO: Created: latency-svc-bcqdt May 12 
13:00:20.271: INFO: Got endpoints: latency-svc-bcqdt [1.124538172s] May 12 13:00:20.340: INFO: Created: latency-svc-zfmdd May 12 13:00:20.393: INFO: Got endpoints: latency-svc-zfmdd [1.163685237s] May 12 13:00:20.456: INFO: Created: latency-svc-2crzw May 12 13:00:20.471: INFO: Got endpoints: latency-svc-2crzw [1.180291555s] May 12 13:00:20.535: INFO: Created: latency-svc-cfjqs May 12 13:00:20.556: INFO: Got endpoints: latency-svc-cfjqs [1.184564178s] May 12 13:00:20.659: INFO: Created: latency-svc-nkbbt May 12 13:00:20.676: INFO: Got endpoints: latency-svc-nkbbt [1.238377697s] May 12 13:00:20.708: INFO: Created: latency-svc-g8bfz May 12 13:00:20.718: INFO: Got endpoints: latency-svc-g8bfz [1.184399758s] May 12 13:00:20.839: INFO: Created: latency-svc-g4zjv May 12 13:00:20.895: INFO: Got endpoints: latency-svc-g4zjv [1.288058867s] May 12 13:00:20.895: INFO: Created: latency-svc-tspgf May 12 13:00:21.062: INFO: Got endpoints: latency-svc-tspgf [1.377707333s] May 12 13:00:21.092: INFO: Created: latency-svc-7ws9v May 12 13:00:21.123: INFO: Got endpoints: latency-svc-7ws9v [1.380378197s] May 12 13:00:21.274: INFO: Created: latency-svc-j9qwt May 12 13:00:21.303: INFO: Got endpoints: latency-svc-j9qwt [1.487186218s] May 12 13:00:21.332: INFO: Created: latency-svc-79hd7 May 12 13:00:21.352: INFO: Got endpoints: latency-svc-79hd7 [1.486006869s] May 12 13:00:21.438: INFO: Created: latency-svc-7n7s5 May 12 13:00:21.465: INFO: Got endpoints: latency-svc-7n7s5 [1.48508925s] May 12 13:00:21.465: INFO: Created: latency-svc-ctnbd May 12 13:00:21.485: INFO: Got endpoints: latency-svc-ctnbd [1.463817356s] May 12 13:00:21.512: INFO: Created: latency-svc-jff7z May 12 13:00:21.582: INFO: Got endpoints: latency-svc-jff7z [1.465728737s] May 12 13:00:21.603: INFO: Created: latency-svc-j2xbw May 12 13:00:21.618: INFO: Got endpoints: latency-svc-j2xbw [1.441518346s] May 12 13:00:21.644: INFO: Created: latency-svc-95z4v May 12 13:00:21.661: INFO: Got endpoints: latency-svc-95z4v 
[1.389860215s] May 12 13:00:21.719: INFO: Created: latency-svc-4z4w6 May 12 13:00:21.740: INFO: Got endpoints: latency-svc-4z4w6 [1.346711167s] May 12 13:00:21.771: INFO: Created: latency-svc-zrtdp May 12 13:00:21.781: INFO: Got endpoints: latency-svc-zrtdp [1.309887373s] May 12 13:00:21.807: INFO: Created: latency-svc-4xhmb May 12 13:00:21.851: INFO: Got endpoints: latency-svc-4xhmb [1.294732627s] May 12 13:00:21.926: INFO: Created: latency-svc-v5mtx May 12 13:00:21.995: INFO: Got endpoints: latency-svc-v5mtx [1.318836797s] May 12 13:00:22.040: INFO: Created: latency-svc-wcg42 May 12 13:00:22.067: INFO: Got endpoints: latency-svc-wcg42 [1.349053882s] May 12 13:00:22.148: INFO: Created: latency-svc-5x56n May 12 13:00:22.187: INFO: Got endpoints: latency-svc-5x56n [1.292360434s] May 12 13:00:22.324: INFO: Created: latency-svc-gb55q May 12 13:00:22.377: INFO: Got endpoints: latency-svc-gb55q [1.314561663s] May 12 13:00:22.401: INFO: Created: latency-svc-6rw8q May 12 13:00:22.403: INFO: Got endpoints: latency-svc-6rw8q [1.279711703s] May 12 13:00:22.581: INFO: Created: latency-svc-gdkst May 12 13:00:22.655: INFO: Got endpoints: latency-svc-gdkst [1.351837228s] May 12 13:00:22.774: INFO: Created: latency-svc-hww7g May 12 13:00:22.791: INFO: Got endpoints: latency-svc-hww7g [1.43919677s] May 12 13:00:22.834: INFO: Created: latency-svc-7ckn2 May 12 13:00:22.866: INFO: Got endpoints: latency-svc-7ckn2 [1.401016957s] May 12 13:00:22.929: INFO: Created: latency-svc-fdflf May 12 13:00:22.989: INFO: Got endpoints: latency-svc-fdflf [1.503616605s] May 12 13:00:23.048: INFO: Created: latency-svc-s9xgl May 12 13:00:23.052: INFO: Got endpoints: latency-svc-s9xgl [1.46960334s] May 12 13:00:23.097: INFO: Created: latency-svc-7k2lt May 12 13:00:23.119: INFO: Got endpoints: latency-svc-7k2lt [1.500448009s] May 12 13:00:23.192: INFO: Created: latency-svc-p9zl2 May 12 13:00:23.212: INFO: Got endpoints: latency-svc-p9zl2 [1.551422206s] May 12 13:00:23.374: INFO: Created: 
latency-svc-g5rbn May 12 13:00:23.378: INFO: Got endpoints: latency-svc-g5rbn [1.637934348s] May 12 13:00:23.404: INFO: Created: latency-svc-5gnvl May 12 13:00:23.419: INFO: Got endpoints: latency-svc-5gnvl [1.638371316s] May 12 13:00:23.527: INFO: Created: latency-svc-dwpxk May 12 13:00:23.546: INFO: Got endpoints: latency-svc-dwpxk [1.6948736s] May 12 13:00:23.615: INFO: Created: latency-svc-rzwvj May 12 13:00:23.671: INFO: Got endpoints: latency-svc-rzwvj [1.676478998s] May 12 13:00:23.699: INFO: Created: latency-svc-8wc6z May 12 13:00:23.720: INFO: Got endpoints: latency-svc-8wc6z [1.653032572s] May 12 13:00:23.823: INFO: Created: latency-svc-mdnn4 May 12 13:00:23.826: INFO: Got endpoints: latency-svc-mdnn4 [1.638844308s] May 12 13:00:23.885: INFO: Created: latency-svc-n4vnz May 12 13:00:23.901: INFO: Got endpoints: latency-svc-n4vnz [1.523891986s] May 12 13:00:23.971: INFO: Created: latency-svc-2jkls May 12 13:00:23.975: INFO: Got endpoints: latency-svc-2jkls [1.571445286s] May 12 13:00:24.015: INFO: Created: latency-svc-mtt2t May 12 13:00:24.163: INFO: Got endpoints: latency-svc-mtt2t [1.507596146s] May 12 13:00:24.164: INFO: Created: latency-svc-mpwtb May 12 13:00:24.214: INFO: Got endpoints: latency-svc-mpwtb [1.423271563s] May 12 13:00:24.250: INFO: Created: latency-svc-hq95r May 12 13:00:24.420: INFO: Got endpoints: latency-svc-hq95r [1.553684982s] May 12 13:00:24.460: INFO: Created: latency-svc-pzqhf May 12 13:00:24.491: INFO: Got endpoints: latency-svc-pzqhf [1.501388435s] May 12 13:00:24.576: INFO: Created: latency-svc-jfrr8 May 12 13:00:24.647: INFO: Got endpoints: latency-svc-jfrr8 [1.595347917s] May 12 13:00:24.648: INFO: Created: latency-svc-jdn6h May 12 13:00:24.743: INFO: Got endpoints: latency-svc-jdn6h [1.624298902s] May 12 13:00:24.767: INFO: Created: latency-svc-xb6h5 May 12 13:00:24.810: INFO: Got endpoints: latency-svc-xb6h5 [1.597389232s] May 12 13:00:24.941: INFO: Created: latency-svc-9cwmh May 12 13:00:24.971: INFO: Got endpoints: 
latency-svc-9cwmh [1.593335194s] May 12 13:00:25.122: INFO: Created: latency-svc-zg76f May 12 13:00:25.126: INFO: Got endpoints: latency-svc-zg76f [1.706873384s] May 12 13:00:25.264: INFO: Created: latency-svc-jsmlc May 12 13:00:25.320: INFO: Got endpoints: latency-svc-jsmlc [1.774306026s] May 12 13:00:25.362: INFO: Created: latency-svc-lshxr May 12 13:00:25.406: INFO: Got endpoints: latency-svc-lshxr [1.734457489s] May 12 13:00:25.433: INFO: Created: latency-svc-kl7sw May 12 13:00:25.446: INFO: Got endpoints: latency-svc-kl7sw [1.726061373s] May 12 13:00:25.475: INFO: Created: latency-svc-pssm5 May 12 13:00:25.483: INFO: Got endpoints: latency-svc-pssm5 [1.656913365s] May 12 13:00:25.540: INFO: Created: latency-svc-jbpk8 May 12 13:00:25.566: INFO: Got endpoints: latency-svc-jbpk8 [1.664997323s] May 12 13:00:25.606: INFO: Created: latency-svc-2rfxv May 12 13:00:25.628: INFO: Got endpoints: latency-svc-2rfxv [1.652991276s] May 12 13:00:25.708: INFO: Created: latency-svc-t5qg7 May 12 13:00:25.734: INFO: Created: latency-svc-drvpq May 12 13:00:25.735: INFO: Got endpoints: latency-svc-t5qg7 [1.572005511s] May 12 13:00:25.764: INFO: Got endpoints: latency-svc-drvpq [1.549638632s] May 12 13:00:25.845: INFO: Created: latency-svc-gjc6k May 12 13:00:25.848: INFO: Got endpoints: latency-svc-gjc6k [1.428586797s] May 12 13:00:25.876: INFO: Created: latency-svc-wtb4x May 12 13:00:25.906: INFO: Got endpoints: latency-svc-wtb4x [1.415325273s] May 12 13:00:25.936: INFO: Created: latency-svc-rqs2g May 12 13:00:25.983: INFO: Got endpoints: latency-svc-rqs2g [1.335535347s] May 12 13:00:25.991: INFO: Created: latency-svc-f8phh May 12 13:00:26.008: INFO: Got endpoints: latency-svc-f8phh [1.26516578s] May 12 13:00:26.039: INFO: Created: latency-svc-stdm6 May 12 13:00:26.057: INFO: Got endpoints: latency-svc-stdm6 [1.247434331s] May 12 13:00:26.082: INFO: Created: latency-svc-mtdzb May 12 13:00:26.139: INFO: Got endpoints: latency-svc-mtdzb [1.167451325s] May 12 13:00:26.158: INFO: 
Created: latency-svc-8dgd4 May 12 13:00:26.173: INFO: Got endpoints: latency-svc-8dgd4 [1.04680448s] May 12 13:00:26.195: INFO: Created: latency-svc-c68db May 12 13:00:26.214: INFO: Got endpoints: latency-svc-c68db [893.326836ms] May 12 13:00:26.270: INFO: Created: latency-svc-kz4qr May 12 13:00:26.281: INFO: Got endpoints: latency-svc-kz4qr [875.034112ms] May 12 13:00:26.302: INFO: Created: latency-svc-4tmnt May 12 13:00:26.326: INFO: Got endpoints: latency-svc-4tmnt [879.480074ms] May 12 13:00:26.362: INFO: Created: latency-svc-pwwr6 May 12 13:00:26.407: INFO: Got endpoints: latency-svc-pwwr6 [924.551174ms] May 12 13:00:26.453: INFO: Created: latency-svc-844wz May 12 13:00:26.472: INFO: Got endpoints: latency-svc-844wz [906.291109ms] May 12 13:00:26.495: INFO: Created: latency-svc-7q2ht May 12 13:00:26.599: INFO: Got endpoints: latency-svc-7q2ht [971.714828ms] May 12 13:00:26.602: INFO: Created: latency-svc-vp5mj May 12 13:00:26.610: INFO: Got endpoints: latency-svc-vp5mj [875.245395ms] May 12 13:00:26.638: INFO: Created: latency-svc-n4bhp May 12 13:00:26.681: INFO: Got endpoints: latency-svc-n4bhp [916.881557ms] May 12 13:00:26.756: INFO: Created: latency-svc-zz8bl May 12 13:00:26.761: INFO: Got endpoints: latency-svc-zz8bl [912.977097ms] May 12 13:00:26.824: INFO: Created: latency-svc-84sws May 12 13:00:26.905: INFO: Got endpoints: latency-svc-84sws [998.984746ms] May 12 13:00:26.921: INFO: Created: latency-svc-bzqjc May 12 13:00:26.973: INFO: Got endpoints: latency-svc-bzqjc [990.202488ms] May 12 13:00:27.140: INFO: Created: latency-svc-h5cwq May 12 13:00:27.158: INFO: Got endpoints: latency-svc-h5cwq [1.150229588s] May 12 13:00:27.408: INFO: Created: latency-svc-z7zbx May 12 13:00:27.464: INFO: Got endpoints: latency-svc-z7zbx [1.406974775s] May 12 13:00:27.630: INFO: Created: latency-svc-qkg8z May 12 13:00:27.679: INFO: Got endpoints: latency-svc-qkg8z [1.539587932s] May 12 13:00:27.726: INFO: Created: latency-svc-wfqjs May 12 13:00:27.828: INFO: Got 
endpoints: latency-svc-wfqjs [1.654350079s] May 12 13:00:27.834: INFO: Created: latency-svc-dlhk8 May 12 13:00:27.861: INFO: Got endpoints: latency-svc-dlhk8 [1.647043594s] May 12 13:00:28.145: INFO: Created: latency-svc-ljtkk May 12 13:00:28.203: INFO: Got endpoints: latency-svc-ljtkk [1.92202247s] May 12 13:00:28.704: INFO: Created: latency-svc-xss6g May 12 13:00:28.707: INFO: Got endpoints: latency-svc-xss6g [2.381288501s] May 12 13:00:29.116: INFO: Created: latency-svc-q4wqv May 12 13:00:29.270: INFO: Got endpoints: latency-svc-q4wqv [2.862657096s] May 12 13:00:29.490: INFO: Created: latency-svc-tnkbq May 12 13:00:29.629: INFO: Got endpoints: latency-svc-tnkbq [3.157117598s] May 12 13:00:29.973: INFO: Created: latency-svc-24rlt May 12 13:00:30.023: INFO: Got endpoints: latency-svc-24rlt [3.423200828s] May 12 13:00:30.059: INFO: Created: latency-svc-7hgrc May 12 13:00:30.186: INFO: Got endpoints: latency-svc-7hgrc [3.576213493s] May 12 13:00:30.229: INFO: Created: latency-svc-n529p May 12 13:00:30.243: INFO: Got endpoints: latency-svc-n529p [3.562220358s] May 12 13:00:30.336: INFO: Created: latency-svc-57n2c May 12 13:00:30.352: INFO: Got endpoints: latency-svc-57n2c [3.591110193s] May 12 13:00:30.383: INFO: Created: latency-svc-xp2th May 12 13:00:30.393: INFO: Got endpoints: latency-svc-xp2th [3.488413461s] May 12 13:00:30.420: INFO: Created: latency-svc-ngqn5 May 12 13:00:30.474: INFO: Got endpoints: latency-svc-ngqn5 [3.500572161s] May 12 13:00:30.478: INFO: Created: latency-svc-q26jf May 12 13:00:30.509: INFO: Got endpoints: latency-svc-q26jf [3.350709013s] May 12 13:00:30.533: INFO: Created: latency-svc-ps2rh May 12 13:00:30.545: INFO: Got endpoints: latency-svc-ps2rh [3.081087479s] May 12 13:00:30.618: INFO: Created: latency-svc-2p9ng May 12 13:00:30.629: INFO: Got endpoints: latency-svc-2p9ng [2.950478593s] May 12 13:00:30.678: INFO: Created: latency-svc-dt27d May 12 13:00:30.696: INFO: Got endpoints: latency-svc-dt27d [2.868403099s] May 12 13:00:30.773: 
INFO: Created: latency-svc-tpxbr May 12 13:00:30.780: INFO: Got endpoints: latency-svc-tpxbr [2.918848382s] May 12 13:00:30.805: INFO: Created: latency-svc-6lv2t May 12 13:00:30.822: INFO: Got endpoints: latency-svc-6lv2t [2.619254319s] May 12 13:00:30.846: INFO: Created: latency-svc-scdgv May 12 13:00:30.865: INFO: Got endpoints: latency-svc-scdgv [2.157643932s] May 12 13:00:30.941: INFO: Created: latency-svc-mfm4t May 12 13:00:30.943: INFO: Got endpoints: latency-svc-mfm4t [1.672882784s] May 12 13:00:31.007: INFO: Created: latency-svc-jr7ww May 12 13:00:31.096: INFO: Got endpoints: latency-svc-jr7ww [1.466764422s] May 12 13:00:31.122: INFO: Created: latency-svc-nbtw4 May 12 13:00:31.136: INFO: Got endpoints: latency-svc-nbtw4 [1.113194609s] May 12 13:00:31.176: INFO: Created: latency-svc-sxf78 May 12 13:00:31.191: INFO: Got endpoints: latency-svc-sxf78 [1.004908327s] May 12 13:00:31.283: INFO: Created: latency-svc-tp76z May 12 13:00:31.325: INFO: Got endpoints: latency-svc-tp76z [1.082042267s] May 12 13:00:31.450: INFO: Created: latency-svc-56fs5 May 12 13:00:31.467: INFO: Got endpoints: latency-svc-56fs5 [1.114302102s] May 12 13:00:31.660: INFO: Created: latency-svc-wldgh May 12 13:00:31.714: INFO: Got endpoints: latency-svc-wldgh [1.320156387s] May 12 13:00:31.822: INFO: Created: latency-svc-l6j98 May 12 13:00:31.825: INFO: Got endpoints: latency-svc-l6j98 [1.35128064s] May 12 13:00:32.133: INFO: Created: latency-svc-txkpc May 12 13:00:32.287: INFO: Got endpoints: latency-svc-txkpc [1.778109356s] May 12 13:00:32.344: INFO: Created: latency-svc-n7f4q May 12 13:00:32.348: INFO: Got endpoints: latency-svc-n7f4q [1.802921663s] May 12 13:00:32.426: INFO: Created: latency-svc-d4lxg May 12 13:00:32.429: INFO: Got endpoints: latency-svc-d4lxg [1.800197781s] May 12 13:00:32.498: INFO: Created: latency-svc-mmstj May 12 13:00:32.851: INFO: Got endpoints: latency-svc-mmstj [2.15519307s] May 12 13:00:32.856: INFO: Created: latency-svc-k9lk7 May 12 13:00:33.012: INFO: Got 
endpoints: latency-svc-k9lk7 [2.232541457s] May 12 13:00:33.049: INFO: Created: latency-svc-fvnkf May 12 13:00:33.099: INFO: Got endpoints: latency-svc-fvnkf [2.276560526s] May 12 13:00:33.232: INFO: Created: latency-svc-qc24b May 12 13:00:33.302: INFO: Got endpoints: latency-svc-qc24b [2.437018984s] May 12 13:00:33.422: INFO: Created: latency-svc-w5btl May 12 13:00:33.475: INFO: Got endpoints: latency-svc-w5btl [2.531777663s] May 12 13:00:33.555: INFO: Created: latency-svc-l5x82 May 12 13:00:33.579: INFO: Got endpoints: latency-svc-l5x82 [2.482925282s] May 12 13:00:33.626: INFO: Created: latency-svc-8wxdq May 12 13:00:33.732: INFO: Got endpoints: latency-svc-8wxdq [2.595858866s] May 12 13:00:33.735: INFO: Created: latency-svc-lmqv9 May 12 13:00:33.779: INFO: Got endpoints: latency-svc-lmqv9 [2.587214761s] May 12 13:00:34.206: INFO: Created: latency-svc-l82pk May 12 13:00:34.282: INFO: Created: latency-svc-g4wb8 May 12 13:00:34.285: INFO: Got endpoints: latency-svc-l82pk [2.960093053s] May 12 13:00:34.379: INFO: Got endpoints: latency-svc-g4wb8 [2.911756379s] May 12 13:00:34.599: INFO: Created: latency-svc-qs7qp May 12 13:00:34.640: INFO: Got endpoints: latency-svc-qs7qp [2.926300607s] May 12 13:00:34.713: INFO: Created: latency-svc-8s2m6 May 12 13:00:34.751: INFO: Got endpoints: latency-svc-8s2m6 [2.926048779s] May 12 13:00:34.857: INFO: Created: latency-svc-nmptl May 12 13:00:34.872: INFO: Got endpoints: latency-svc-nmptl [2.584334414s] May 12 13:00:34.911: INFO: Created: latency-svc-j8b67 May 12 13:00:34.932: INFO: Got endpoints: latency-svc-j8b67 [2.584156592s] May 12 13:00:35.056: INFO: Created: latency-svc-hqnlx May 12 13:00:35.103: INFO: Got endpoints: latency-svc-hqnlx [2.673680076s] May 12 13:00:35.104: INFO: Created: latency-svc-2c4f5 May 12 13:00:35.228: INFO: Got endpoints: latency-svc-2c4f5 [2.376781145s] May 12 13:00:35.283: INFO: Created: latency-svc-wb7kp May 12 13:00:35.327: INFO: Got endpoints: latency-svc-wb7kp [2.314168765s] May 12 13:00:35.420: 
INFO: Created: latency-svc-6b2lx May 12 13:00:35.471: INFO: Got endpoints: latency-svc-6b2lx [2.371578273s] May 12 13:00:35.563: INFO: Created: latency-svc-4xwls May 12 13:00:35.600: INFO: Got endpoints: latency-svc-4xwls [2.297991877s] May 12 13:00:35.633: INFO: Created: latency-svc-qjpsx May 12 13:00:35.793: INFO: Got endpoints: latency-svc-qjpsx [2.31814781s] May 12 13:00:35.799: INFO: Created: latency-svc-mxzc8 May 12 13:00:35.828: INFO: Got endpoints: latency-svc-mxzc8 [2.248695259s] May 12 13:00:35.933: INFO: Created: latency-svc-hvfjj May 12 13:00:35.942: INFO: Got endpoints: latency-svc-hvfjj [2.210213966s] May 12 13:00:35.962: INFO: Created: latency-svc-gmbml May 12 13:00:35.979: INFO: Got endpoints: latency-svc-gmbml [2.200083411s] May 12 13:00:36.004: INFO: Created: latency-svc-sdp7x May 12 13:00:36.015: INFO: Got endpoints: latency-svc-sdp7x [1.729202612s] May 12 13:00:36.079: INFO: Created: latency-svc-m2pvf May 12 13:00:36.100: INFO: Got endpoints: latency-svc-m2pvf [1.721187261s] May 12 13:00:36.136: INFO: Created: latency-svc-glml5 May 12 13:00:36.172: INFO: Got endpoints: latency-svc-glml5 [1.531708159s] May 12 13:00:36.229: INFO: Created: latency-svc-vdqwk May 12 13:00:36.236: INFO: Got endpoints: latency-svc-vdqwk [1.485318183s] May 12 13:00:36.305: INFO: Created: latency-svc-zvpww May 12 13:00:36.322: INFO: Got endpoints: latency-svc-zvpww [1.450751805s] May 12 13:00:36.366: INFO: Created: latency-svc-d67dp May 12 13:00:36.369: INFO: Got endpoints: latency-svc-d67dp [1.436258099s] May 12 13:00:36.394: INFO: Created: latency-svc-wnf4k May 12 13:00:36.425: INFO: Got endpoints: latency-svc-wnf4k [1.32142153s] May 12 13:00:36.461: INFO: Created: latency-svc-sckbk May 12 13:00:36.498: INFO: Got endpoints: latency-svc-sckbk [1.269345517s] May 12 13:00:36.526: INFO: Created: latency-svc-hvd4h May 12 13:00:36.559: INFO: Got endpoints: latency-svc-hvd4h [1.231878809s] May 12 13:00:36.591: INFO: Created: latency-svc-lhl58 May 12 13:00:36.624: INFO: Got 
endpoints: latency-svc-lhl58 [1.152861743s] May 12 13:00:36.639: INFO: Created: latency-svc-6s94k May 12 13:00:36.655: INFO: Got endpoints: latency-svc-6s94k [1.054673264s] May 12 13:00:36.684: INFO: Created: latency-svc-x7p99 May 12 13:00:36.698: INFO: Got endpoints: latency-svc-x7p99 [904.402621ms] May 12 13:00:36.767: INFO: Created: latency-svc-rcgqf May 12 13:00:36.773: INFO: Got endpoints: latency-svc-rcgqf [945.221425ms] May 12 13:00:36.802: INFO: Created: latency-svc-28v97 May 12 13:00:36.818: INFO: Got endpoints: latency-svc-28v97 [876.34017ms] May 12 13:00:36.843: INFO: Created: latency-svc-n7hcb May 12 13:00:36.860: INFO: Got endpoints: latency-svc-n7hcb [881.743677ms] May 12 13:00:36.929: INFO: Created: latency-svc-d4gbr May 12 13:00:36.933: INFO: Got endpoints: latency-svc-d4gbr [918.263105ms] May 12 13:00:37.025: INFO: Created: latency-svc-wpfg9 May 12 13:00:37.200: INFO: Got endpoints: latency-svc-wpfg9 [1.099703341s] May 12 13:00:37.265: INFO: Created: latency-svc-c2mrh May 12 13:00:37.294: INFO: Got endpoints: latency-svc-c2mrh [1.121715178s] May 12 13:00:37.356: INFO: Created: latency-svc-kxlrx May 12 13:00:37.384: INFO: Got endpoints: latency-svc-kxlrx [1.147490346s] May 12 13:00:37.415: INFO: Created: latency-svc-brpcm May 12 13:00:37.432: INFO: Got endpoints: latency-svc-brpcm [1.109422025s] May 12 13:00:37.492: INFO: Created: latency-svc-d7tz8 May 12 13:00:37.506: INFO: Got endpoints: latency-svc-d7tz8 [1.136694014s] May 12 13:00:37.533: INFO: Created: latency-svc-bgnq8 May 12 13:00:37.563: INFO: Got endpoints: latency-svc-bgnq8 [1.137977783s] May 12 13:00:37.632: INFO: Created: latency-svc-tppcx May 12 13:00:37.638: INFO: Got endpoints: latency-svc-tppcx [1.140562502s] May 12 13:00:37.666: INFO: Created: latency-svc-9d47p May 12 13:00:37.697: INFO: Got endpoints: latency-svc-9d47p [1.138865768s] May 12 13:00:37.698: INFO: Latencies: [149.008822ms 173.71598ms 224.358352ms 372.13561ms 423.013137ms 518.734502ms 570.938756ms 662.243991ms 
753.6457ms 875.034112ms 875.245395ms 876.34017ms 879.480074ms 879.675329ms 881.743677ms 893.326836ms 904.402621ms 906.291109ms 912.977097ms 916.881557ms 918.263105ms 924.551174ms 945.221425ms 949.381595ms 971.714828ms 990.202488ms 998.984746ms 1.004908327s 1.04680448s 1.054673264s 1.070314673s 1.082042267s 1.085921841s 1.099703341s 1.109422025s 1.113194609s 1.114302102s 1.121715178s 1.124538172s 1.136694014s 1.137977783s 1.138865768s 1.140562502s 1.147490346s 1.150229588s 1.152861743s 1.159795396s 1.163685237s 1.167451325s 1.180123665s 1.180291555s 1.184207315s 1.184399758s 1.184564178s 1.194091851s 1.231878809s 1.236148527s 1.238377697s 1.238387214s 1.247434331s 1.259383566s 1.26516578s 1.269345517s 1.279711703s 1.288058867s 1.292360434s 1.294732627s 1.309887373s 1.314561663s 1.318836797s 1.320156387s 1.32142153s 1.328807049s 1.332871768s 1.335535347s 1.346711167s 1.349053882s 1.35128064s 1.351837228s 1.355211078s 1.358441754s 1.37733673s 1.377707333s 1.378729951s 1.380378197s 1.389860215s 1.392819276s 1.401016957s 1.406974775s 1.415325273s 1.423271563s 1.428586797s 1.430911162s 1.436258099s 1.43919677s 1.441518346s 1.450751805s 1.463817356s 1.465123851s 1.465728737s 1.466764422s 1.46960334s 1.48508925s 1.485318183s 1.486006869s 1.487186218s 1.500448009s 1.501388435s 1.503616605s 1.507596146s 1.511198523s 1.523891986s 1.531708159s 1.539587932s 1.545389274s 1.549638632s 1.551422206s 1.553684982s 1.571445286s 1.572005511s 1.593335194s 1.595347917s 1.597389232s 1.624298902s 1.637934348s 1.638371316s 1.638844308s 1.647043594s 1.652991276s 1.653032041s 1.653032572s 1.654350079s 1.656913365s 1.664997323s 1.666069236s 1.672882784s 1.676478998s 1.6948736s 1.700980292s 1.706873384s 1.721187261s 1.726061373s 1.729202612s 1.734457489s 1.774306026s 1.778109356s 1.800197781s 1.802921663s 1.826960471s 1.831485698s 1.837927736s 1.839384945s 1.842229229s 1.857909496s 1.861716516s 1.865808678s 1.868927924s 1.870922689s 1.875055631s 1.876782444s 1.92202247s 2.15519307s 2.157643932s 
2.200083411s 2.210213966s 2.232541457s 2.248695259s 2.276560526s 2.297991877s 2.314168765s 2.31814781s 2.371578273s 2.376781145s 2.381288501s 2.437018984s 2.482925282s 2.531777663s 2.584156592s 2.584334414s 2.587214761s 2.595858866s 2.619254319s 2.673680076s 2.862657096s 2.868403099s 2.911756379s 2.918848382s 2.926048779s 2.926300607s 2.950478593s 2.960093053s 3.081087479s 3.157117598s 3.350709013s 3.423200828s 3.488413461s 3.500572161s 3.562220358s 3.576213493s 3.591110193s]
May 12 13:00:37.698: INFO: 50 %ile: 1.466764422s
May 12 13:00:37.698: INFO: 90 %ile: 2.595858866s
May 12 13:00:37.698: INFO: 99 %ile: 3.576213493s
May 12 13:00:37.698: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:00:37.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8888" for this suite.

• [SLOW TEST:26.410 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":128,"skipped":2044,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:00:37.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:00:37.849: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-cac65502-76f1-4424-83df-14bb60c7e3cf
STEP: Creating a pod to test consume configMaps
May 12 13:00:38.059: INFO: Waiting up to 5m0s for pod "pod-configmaps-9dd2a354-91cc-4299-9557-03c82e65adc2" in namespace "configmap-1776" to be "Succeeded or Failed"
May 12 13:00:38.078: INFO: Pod "pod-configmaps-9dd2a354-91cc-4299-9557-03c82e65adc2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.84785ms
May 12 13:00:40.091: INFO: Pod "pod-configmaps-9dd2a354-91cc-4299-9557-03c82e65adc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032200214s
May 12 13:00:42.095: INFO: Pod "pod-configmaps-9dd2a354-91cc-4299-9557-03c82e65adc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036753311s
May 12 13:00:44.111: INFO: Pod "pod-configmaps-9dd2a354-91cc-4299-9557-03c82e65adc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052017657s
STEP: Saw pod success
May 12 13:00:44.111: INFO: Pod "pod-configmaps-9dd2a354-91cc-4299-9557-03c82e65adc2" satisfied condition "Succeeded or Failed"
May 12 13:00:44.228: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-9dd2a354-91cc-4299-9557-03c82e65adc2 container configmap-volume-test: 
STEP: delete the pod
May 12 13:00:44.381: INFO: Waiting for pod pod-configmaps-9dd2a354-91cc-4299-9557-03c82e65adc2 to disappear
May 12 13:00:44.398: INFO: Pod pod-configmaps-9dd2a354-91cc-4299-9557-03c82e65adc2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:00:44.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1776" for this suite.

• [SLOW TEST:6.492 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2111,"failed":0}
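The pod this test builds through the Go client can be sketched as a manifest along these lines. This is a hypothetical reconstruction, not taken from the log: the object names (the log's carry random UID suffixes), the image, and the `mounttest` arguments are illustrative assumptions.

```yaml
# Hypothetical sketch: a configMap volume mounted into a pod whose container
# prints the mounted file and exits, so the pod reaches phase "Succeeded".
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume        # log name has a random UID suffix
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never               # the test waits for "Succeeded or Failed"
  containers:
  - name: configmap-volume-test      # container name matches the log
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # illustrative image/tag
    args: ["mounttest", "--file_content=/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```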
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:00:44.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:00:44.638: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1468236-2f11-401f-8c66-acdfd725cfd2" in namespace "projected-8377" to be "Succeeded or Failed"
May 12 13:00:44.761: INFO: Pod "downwardapi-volume-d1468236-2f11-401f-8c66-acdfd725cfd2": Phase="Pending", Reason="", readiness=false. Elapsed: 123.858158ms
May 12 13:00:47.003: INFO: Pod "downwardapi-volume-d1468236-2f11-401f-8c66-acdfd725cfd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36523303s
May 12 13:00:49.098: INFO: Pod "downwardapi-volume-d1468236-2f11-401f-8c66-acdfd725cfd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.460162008s
May 12 13:00:51.254: INFO: Pod "downwardapi-volume-d1468236-2f11-401f-8c66-acdfd725cfd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.616729156s
STEP: Saw pod success
May 12 13:00:51.254: INFO: Pod "downwardapi-volume-d1468236-2f11-401f-8c66-acdfd725cfd2" satisfied condition "Succeeded or Failed"
May 12 13:00:51.438: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d1468236-2f11-401f-8c66-acdfd725cfd2 container client-container: 
STEP: delete the pod
May 12 13:00:51.815: INFO: Waiting for pod downwardapi-volume-d1468236-2f11-401f-8c66-acdfd725cfd2 to disappear
May 12 13:00:51.920: INFO: Pod downwardapi-volume-d1468236-2f11-401f-8c66-acdfd725cfd2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:00:51.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8377" for this suite.

• [SLOW TEST:7.551 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2113,"failed":0}
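What "default memory limit" means here: when a projected downwardAPI volume exposes `limits.memory` but the container sets no memory limit, the kubelet substitutes the node's allocatable memory. A hypothetical sketch of such a pod (names, image, and paths are illustrative assumptions, not from the log):

```yaml
# Hypothetical sketch: projected downwardAPI volume exposing limits.memory
# while the container deliberately sets no resources.limits.memory, so the
# file contains node allocatable memory - the value this test asserts on.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name matches the log
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # illustrative image/tag
    args: ["mounttest", "--file_content=/etc/podinfo/memory_limit"]
    # no resources.limits.memory here - that is the point of the test
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```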
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:00:51.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 12 13:00:58.735: INFO: Successfully updated pod "pod-update-1f1c05ff-b0c5-43c7-a169-88cd8542d50d"
STEP: verifying the updated pod is in kubernetes
May 12 13:00:59.116: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:00:59.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2063" for this suite.

• [SLOW TEST:7.205 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2138,"failed":0}
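The "updating the pod" step exercises the small set of pod fields that are mutable after creation. A hypothetical fragment of the kind of change applied (the label key and values are illustrative assumptions; the log only reports "Pod update OK"):

```yaml
# Hypothetical sketch: on a live pod, only fields such as metadata.labels,
# metadata.annotations, container image, activeDeadlineSeconds, and
# tolerations may be updated. The test changes metadata and re-reads the
# pod to verify the update took effect.
metadata:
  labels:
    time: "updated"    # illustrative: replaces the value set at creation
```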
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:00:59.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 13:01:00.490: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 13:01:02.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:01:05.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 13:01:08.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
May 12 13:01:09.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
May 12 13:01:10.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
May 12 13:01:11.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
May 12 13:01:12.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
May 12 13:01:13.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
May 12 13:01:14.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
May 12 13:01:15.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:01:16.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7051" for this suite.
STEP: Destroying namespace "webhook-7051-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.170 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":133,"skipped":2164,"failed":0}
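The registration steps above create webhooks whose rules match `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects themselves; the API server must ignore such webhooks for those resources, which is why the subsequent deletions succeed. A hypothetical sketch of one such registration (the configuration name, service path, and CA placeholder are illustrative assumptions; the namespace is from the log):

```yaml
# Hypothetical sketch: a validating webhook that would deny DELETE on webhook
# configuration objects. Admission must skip it for these resource kinds, so
# the test can still delete the dummy configurations.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-webhook-configuration-deletions   # illustrative name
webhooks:
- name: deny-webhook-configuration-deletions.example.com
  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["v1"]
    operations: ["DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  clientConfig:
    service:
      namespace: webhook-7051        # namespace from the log
      name: e2e-test-webhook         # service name from the log
      path: /always-deny             # illustrative path
    caBundle: "<base64-encoded CA>"  # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```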
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:01:17.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:01:18.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
May 12 13:01:19.879: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T13:01:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T13:01:19Z]] name:name1 resourceVersion:3730435 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:12a167c6-8df3-48cb-9456-34d2c88551d6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May 12 13:01:30.075: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T13:01:29Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T13:01:29Z]] name:name2 resourceVersion:3730579 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f2034fdb-43df-43f6-99b1-2b88dba02c1a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May 12 13:01:40.082: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T13:01:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T13:01:40Z]] name:name1 resourceVersion:3730747 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:12a167c6-8df3-48cb-9456-34d2c88551d6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May 12 13:01:50.089: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T13:01:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T13:01:50Z]] name:name2 resourceVersion:3730777 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f2034fdb-43df-43f6-99b1-2b88dba02c1a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May 12 13:02:00.098: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T13:01:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T13:01:40Z]] name:name1 resourceVersion:3730808 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:12a167c6-8df3-48cb-9456-34d2c88551d6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May 12 13:02:10.105: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T13:01:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T13:01:50Z]] name:name2 resourceVersion:3730838 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f2034fdb-43df-43f6-99b1-2b88dba02c1a] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:02:21.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-173" for this suite.

• [SLOW TEST:63.896 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":134,"skipped":2172,"failed":0}
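The ADDED/MODIFIED/DELETED events above come from a watch on instances of a custom resource; the group, version, kind, and plural are all visible in the logged selfLinks. A sketch of the CRD they imply (the schema is otherwise an assumption):

```yaml
# Hypothetical sketch of the CRD behind the watch events: group
# mygroup.example.com, version v1beta1, kind WishIHadChosenNoxu, plural
# "noxus". The selfLink /apis/mygroup.example.com/v1beta1/noxus/name1 has
# no namespace segment, so the resource is cluster-scoped.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Cluster
  names:
    plural: noxus
    kind: WishIHadChosenNoxu
```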
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:02:21.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:02:21.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76676b8f-2622-46b9-a391-bf47102ede32" in namespace "projected-6429" to be "Succeeded or Failed"
May 12 13:02:21.668: INFO: Pod "downwardapi-volume-76676b8f-2622-46b9-a391-bf47102ede32": Phase="Pending", Reason="", readiness=false. Elapsed: 133.908366ms
May 12 13:02:23.671: INFO: Pod "downwardapi-volume-76676b8f-2622-46b9-a391-bf47102ede32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13660828s
May 12 13:02:25.674: INFO: Pod "downwardapi-volume-76676b8f-2622-46b9-a391-bf47102ede32": Phase="Running", Reason="", readiness=true. Elapsed: 4.139699656s
May 12 13:02:27.677: INFO: Pod "downwardapi-volume-76676b8f-2622-46b9-a391-bf47102ede32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.142796779s
STEP: Saw pod success
May 12 13:02:27.677: INFO: Pod "downwardapi-volume-76676b8f-2622-46b9-a391-bf47102ede32" satisfied condition "Succeeded or Failed"
May 12 13:02:27.680: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-76676b8f-2622-46b9-a391-bf47102ede32 container client-container: 
STEP: delete the pod
May 12 13:02:27.742: INFO: Waiting for pod downwardapi-volume-76676b8f-2622-46b9-a391-bf47102ede32 to disappear
May 12 13:02:27.816: INFO: Pod downwardapi-volume-76676b8f-2622-46b9-a391-bf47102ede32 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:02:27.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6429" for this suite.

• [SLOW TEST:6.589 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2183,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:02:27.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
May 12 13:02:28.178: INFO: Waiting up to 5m0s for pod "client-containers-bf99754b-d926-4132-a787-5e6123e54986" in namespace "containers-370" to be "Succeeded or Failed"
May 12 13:02:28.225: INFO: Pod "client-containers-bf99754b-d926-4132-a787-5e6123e54986": Phase="Pending", Reason="", readiness=false. Elapsed: 47.291003ms
May 12 13:02:30.235: INFO: Pod "client-containers-bf99754b-d926-4132-a787-5e6123e54986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057173496s
May 12 13:02:32.242: INFO: Pod "client-containers-bf99754b-d926-4132-a787-5e6123e54986": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064344309s
May 12 13:02:34.246: INFO: Pod "client-containers-bf99754b-d926-4132-a787-5e6123e54986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068305775s
STEP: Saw pod success
May 12 13:02:34.246: INFO: Pod "client-containers-bf99754b-d926-4132-a787-5e6123e54986" satisfied condition "Succeeded or Failed"
May 12 13:02:34.249: INFO: Trying to get logs from node kali-worker pod client-containers-bf99754b-d926-4132-a787-5e6123e54986 container test-container: 
STEP: delete the pod
May 12 13:02:34.305: INFO: Waiting for pod client-containers-bf99754b-d926-4132-a787-5e6123e54986 to disappear
May 12 13:02:34.333: INFO: Pod client-containers-bf99754b-d926-4132-a787-5e6123e54986 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:02:34.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-370" for this suite.

• [SLOW TEST:6.548 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2194,"failed":0}
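Overriding the image's default command means setting `.command` on the container spec, which replaces the image's ENTRYPOINT (while `.args` would replace CMD). A hypothetical sketch (pod name, image, and the echoed strings are illustrative assumptions):

```yaml
# Hypothetical sketch: .command replaces the image ENTRYPOINT, so the
# container runs the override instead of the image default; the test then
# checks the container output for the overridden command's effect.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container             # container name matches the log
    image: docker.io/library/busybox:1.29   # illustrative image/tag
    command: ["/bin/echo", "override", "entrypoint"]
```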
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:02:34.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-84ee6180-fe85-44e2-873d-189d22878669
STEP: Creating a pod to test consume secrets
May 12 13:02:34.443: INFO: Waiting up to 5m0s for pod "pod-secrets-bd2d3b07-616e-4906-b8b9-a0048d003014" in namespace "secrets-8671" to be "Succeeded or Failed"
May 12 13:02:34.458: INFO: Pod "pod-secrets-bd2d3b07-616e-4906-b8b9-a0048d003014": Phase="Pending", Reason="", readiness=false. Elapsed: 15.89221ms
May 12 13:02:36.595: INFO: Pod "pod-secrets-bd2d3b07-616e-4906-b8b9-a0048d003014": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152546126s
May 12 13:02:38.599: INFO: Pod "pod-secrets-bd2d3b07-616e-4906-b8b9-a0048d003014": Phase="Running", Reason="", readiness=true. Elapsed: 4.15595655s
May 12 13:02:40.602: INFO: Pod "pod-secrets-bd2d3b07-616e-4906-b8b9-a0048d003014": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.159824535s
STEP: Saw pod success
May 12 13:02:40.602: INFO: Pod "pod-secrets-bd2d3b07-616e-4906-b8b9-a0048d003014" satisfied condition "Succeeded or Failed"
May 12 13:02:40.605: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-bd2d3b07-616e-4906-b8b9-a0048d003014 container secret-volume-test: 
STEP: delete the pod
May 12 13:02:40.643: INFO: Waiting for pod pod-secrets-bd2d3b07-616e-4906-b8b9-a0048d003014 to disappear
May 12 13:02:40.658: INFO: Pod pod-secrets-bd2d3b07-616e-4906-b8b9-a0048d003014 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:02:40.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8671" for this suite.

• [SLOW TEST:6.294 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2201,"failed":0}
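The `Phase=... Elapsed: ...` lines above come from a polling wait: the framework repeatedly fetches the pod and stops once the phase reaches a terminal value ("Succeeded or Failed") or the 5m0s timeout expires. A minimal sketch of that loop, assuming a `get_phase` callback and a fixed poll interval (both illustrative, not the framework's actual client-go code):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal pod phase or the timeout.

    Each iteration logs the current phase and elapsed time, mirroring the
    'Phase="Pending" ... Elapsed: 2.15s' lines in the log above.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase={phase!r}, Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Simulated phase sequence, like the one logged above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_pod_completion(lambda: next(phases), interval=0.01)
```

The real framework fetches the pod object from the API server each round; the simulated iterator here just stands in for that call.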
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:02:40.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:02:40.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8a56866-4745-42b6-bd2d-8d76f4157b6e" in namespace "downward-api-8292" to be "Succeeded or Failed"
May 12 13:02:40.801: INFO: Pod "downwardapi-volume-a8a56866-4745-42b6-bd2d-8d76f4157b6e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.044484ms
May 12 13:02:42.804: INFO: Pod "downwardapi-volume-a8a56866-4745-42b6-bd2d-8d76f4157b6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029753687s
May 12 13:02:44.967: INFO: Pod "downwardapi-volume-a8a56866-4745-42b6-bd2d-8d76f4157b6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192841934s
May 12 13:02:46.997: INFO: Pod "downwardapi-volume-a8a56866-4745-42b6-bd2d-8d76f4157b6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.222551472s
STEP: Saw pod success
May 12 13:02:46.997: INFO: Pod "downwardapi-volume-a8a56866-4745-42b6-bd2d-8d76f4157b6e" satisfied condition "Succeeded or Failed"
May 12 13:02:47.018: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a8a56866-4745-42b6-bd2d-8d76f4157b6e container client-container: 
STEP: delete the pod
May 12 13:02:47.074: INFO: Waiting for pod downwardapi-volume-a8a56866-4745-42b6-bd2d-8d76f4157b6e to disappear
May 12 13:02:47.144: INFO: Pod downwardapi-volume-a8a56866-4745-42b6-bd2d-8d76f4157b6e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:02:47.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8292" for this suite.

• [SLOW TEST:6.486 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2205,"failed":0}
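The pod this test creates mounts a downward API volume that projects the container's own memory request into a file. A sketch of that kind of pod spec follows; the names, image, and request value are assumptions for illustration (the test generates UID-based names), but the `downwardAPI` volume with a `resourceFieldRef` is the standard mechanism:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # name assumed; the test uses a generated UID
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # image assumed for the sketch
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

The container then reads its own request back from `/etc/podinfo/memory_request`, which is what the test verifies via the container logs.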
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:02:47.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-f669a8bb-864f-4ee5-8f8a-b0a758f9afbf
STEP: Creating a pod to test consume secrets
May 12 13:02:47.378: INFO: Waiting up to 5m0s for pod "pod-secrets-241d1cf4-a1f1-46c5-afd0-c21fba172757" in namespace "secrets-2033" to be "Succeeded or Failed"
May 12 13:02:47.434: INFO: Pod "pod-secrets-241d1cf4-a1f1-46c5-afd0-c21fba172757": Phase="Pending", Reason="", readiness=false. Elapsed: 55.972563ms
May 12 13:02:49.437: INFO: Pod "pod-secrets-241d1cf4-a1f1-46c5-afd0-c21fba172757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059789574s
May 12 13:02:51.451: INFO: Pod "pod-secrets-241d1cf4-a1f1-46c5-afd0-c21fba172757": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073378865s
May 12 13:02:53.524: INFO: Pod "pod-secrets-241d1cf4-a1f1-46c5-afd0-c21fba172757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146331166s
STEP: Saw pod success
May 12 13:02:53.524: INFO: Pod "pod-secrets-241d1cf4-a1f1-46c5-afd0-c21fba172757" satisfied condition "Succeeded or Failed"
May 12 13:02:53.586: INFO: Trying to get logs from node kali-worker pod pod-secrets-241d1cf4-a1f1-46c5-afd0-c21fba172757 container secret-volume-test: 
STEP: delete the pod
May 12 13:02:53.810: INFO: Waiting for pod pod-secrets-241d1cf4-a1f1-46c5-afd0-c21fba172757 to disappear
May 12 13:02:53.912: INFO: Pod pod-secrets-241d1cf4-a1f1-46c5-afd0-c21fba172757 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:02:53.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2033" for this suite.

• [SLOW TEST:6.767 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2259,"failed":0}
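A detail worth noting about the `defaultMode` this test exercises: the Kubernetes API stores volume file modes as plain decimal integers in JSON, so a mode conventionally written in octal must be converted first. A small sketch of that conversion (the example mode values are illustrative):

```python
def default_mode(octal_string):
    """Convert an octal mode string like '0400' to the decimal integer
    that a Secret volume's defaultMode field carries in the API."""
    return int(octal_string, 8)

print(default_mode("0400"))  # → 256
print(default_mode("0644"))  # → 420
```

So a manifest author writing `defaultMode: 256` and one writing `0400` in octal-aware tooling mean the same file permissions.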
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:02:53.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May 12 13:02:54.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4045'
May 12 13:02:58.267: INFO: stderr: ""
May 12 13:02:58.267: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 12 13:02:59.276: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:02:59.276: INFO: Found 0 / 1
May 12 13:03:00.277: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:03:00.277: INFO: Found 0 / 1
May 12 13:03:01.582: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:03:01.582: INFO: Found 0 / 1
May 12 13:03:02.450: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:03:02.450: INFO: Found 0 / 1
May 12 13:03:03.725: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:03:03.725: INFO: Found 0 / 1
May 12 13:03:04.379: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:03:04.379: INFO: Found 1 / 1
May 12 13:03:04.379: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
May 12 13:03:04.382: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:03:04.382: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 12 13:03:04.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-jw6ps --namespace=kubectl-4045 -p {"metadata":{"annotations":{"x":"y"}}}'
May 12 13:03:04.627: INFO: stderr: ""
May 12 13:03:04.627: INFO: stdout: "pod/agnhost-master-jw6ps patched\n"
STEP: checking annotations
May 12 13:03:04.682: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:03:04.682: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:03:04.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4045" for this suite.

• [SLOW TEST:10.769 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":140,"skipped":2272,"failed":0}
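The patch body logged above, `{"metadata":{"annotations":{"x":"y"}}}`, is a merge patch passed to `kubectl patch -p`. Building it programmatically is just nested-dict serialization; a sketch (the helper name is an assumption, not a kubectl API):

```python
import json

def annotation_patch(annotations):
    """Build the merge-patch body used by `kubectl patch -p` to add
    annotations, e.g. {"metadata":{"annotations":{"x":"y"}}}."""
    return json.dumps({"metadata": {"annotations": annotations}},
                      separators=(",", ":"))

print(annotation_patch({"x": "y"}))  # → {"metadata":{"annotations":{"x":"y"}}}
```

Because it is a merge patch, existing annotations not named in the body are left untouched; only the keys supplied are added or updated.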
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:03:04.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:03:13.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6863" for this suite.

• [SLOW TEST:8.729 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:03:13.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 12 13:03:13.606: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:03:24.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-660" for this suite.

• [SLOW TEST:10.987 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":142,"skipped":2320,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:03:24.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8833.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8833.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8833.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8833.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8833.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8833.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8833.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8833.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8833.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8833.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.219.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.219.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.219.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.219.67_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8833.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8833.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8833.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8833.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8833.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8833.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8833.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8833.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8833.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8833.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8833.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.219.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.219.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.219.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.219.67_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 13:03:33.365: INFO: Unable to read wheezy_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:33.368: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:33.370: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:33.373: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:33.394: INFO: Unable to read jessie_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:33.396: INFO: Unable to read jessie_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:33.399: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:33.402: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:33.419: INFO: Lookups using dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1 failed for: [wheezy_udp@dns-test-service.dns-8833.svc.cluster.local wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local jessie_udp@dns-test-service.dns-8833.svc.cluster.local jessie_tcp@dns-test-service.dns-8833.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8833.svc.cluster.local]

May 12 13:03:38.440: INFO: Unable to read wheezy_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:38.482: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:38.686: INFO: Unable to read jessie_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:38.689: INFO: Unable to read jessie_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:38.710: INFO: Lookups using dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1 failed for: [wheezy_udp@dns-test-service.dns-8833.svc.cluster.local wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local jessie_udp@dns-test-service.dns-8833.svc.cluster.local jessie_tcp@dns-test-service.dns-8833.svc.cluster.local]

May 12 13:03:43.423: INFO: Unable to read wheezy_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:43.425: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:43.442: INFO: Unable to read jessie_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:43.444: INFO: Unable to read jessie_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:43.460: INFO: Lookups using dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1 failed for: [wheezy_udp@dns-test-service.dns-8833.svc.cluster.local wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local jessie_udp@dns-test-service.dns-8833.svc.cluster.local jessie_tcp@dns-test-service.dns-8833.svc.cluster.local]

May 12 13:03:48.428: INFO: Unable to read wheezy_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:48.431: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:48.454: INFO: Unable to read jessie_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:48.457: INFO: Unable to read jessie_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:48.478: INFO: Lookups using dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1 failed for: [wheezy_udp@dns-test-service.dns-8833.svc.cluster.local wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local jessie_udp@dns-test-service.dns-8833.svc.cluster.local jessie_tcp@dns-test-service.dns-8833.svc.cluster.local]

May 12 13:03:53.423: INFO: Unable to read wheezy_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:53.427: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:53.452: INFO: Unable to read jessie_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:53.455: INFO: Unable to read jessie_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:53.475: INFO: Lookups using dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1 failed for: [wheezy_udp@dns-test-service.dns-8833.svc.cluster.local wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local jessie_udp@dns-test-service.dns-8833.svc.cluster.local jessie_tcp@dns-test-service.dns-8833.svc.cluster.local]

May 12 13:03:58.424: INFO: Unable to read wheezy_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:58.429: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:58.453: INFO: Unable to read jessie_udp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:58.455: INFO: Unable to read jessie_tcp@dns-test-service.dns-8833.svc.cluster.local from pod dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1: the server could not find the requested resource (get pods dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1)
May 12 13:03:58.494: INFO: Lookups using dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1 failed for: [wheezy_udp@dns-test-service.dns-8833.svc.cluster.local wheezy_tcp@dns-test-service.dns-8833.svc.cluster.local jessie_udp@dns-test-service.dns-8833.svc.cluster.local jessie_tcp@dns-test-service.dns-8833.svc.cluster.local]

May 12 13:04:03.498: INFO: DNS probes using dns-8833/dns-test-7df7cc6d-bee3-43f4-9b99-2a14996b56d1 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:04:04.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8833" for this suite.

• [SLOW TEST:40.329 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":143,"skipped":2322,"failed":0}
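Two naming conventions in the probe script above are easy to miss: each successful lookup writes a result file named `<image>_<proto>@<record>` (which is why the failures list entries like `wheezy_udp@dns-test-service...`), and the PTR check reverses the service IP's octets under `in-addr.arpa` (`10.98.219.67` becomes `67.219.98.10.in-addr.arpa`). A sketch of both conventions, with hypothetical helper names:

```python
def result_file(image, proto, record):
    """Name of the /results file the probe writes on a successful lookup,
    e.g. wheezy_udp@dns-test-service.dns-8833.svc.cluster.local."""
    return f"{image}_{proto}@{record}"

def ptr_name(ipv4):
    """Reverse-lookup name the probe digs for PTR records:
    10.98.219.67 -> 67.219.98.10.in-addr.arpa."""
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa"

print(result_file("wheezy", "udp", "dns-test-service.dns-8833.svc.cluster.local"))
print(ptr_name("10.98.219.67"))  # → 67.219.98.10.in-addr.arpa
```

The test passes once every expected result file exists in the probe pod, which is why the repeated "Lookups ... failed for" blocks resolve into the final "DNS probes ... succeeded" line as records propagate.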
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:04:04.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:04:04.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6507" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":144,"skipped":2362,"failed":0}
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:04:04.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-1373/configmap-test-d98f939c-ed96-475d-833e-717031ff60d4
STEP: Creating a pod to test consume configMaps
May 12 13:04:05.014: INFO: Waiting up to 5m0s for pod "pod-configmaps-de1b0e87-5e17-4589-bea3-161885b870f7" in namespace "configmap-1373" to be "Succeeded or Failed"
May 12 13:04:05.033: INFO: Pod "pod-configmaps-de1b0e87-5e17-4589-bea3-161885b870f7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.115571ms
May 12 13:04:07.037: INFO: Pod "pod-configmaps-de1b0e87-5e17-4589-bea3-161885b870f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02298246s
May 12 13:04:09.051: INFO: Pod "pod-configmaps-de1b0e87-5e17-4589-bea3-161885b870f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03703955s
May 12 13:04:11.055: INFO: Pod "pod-configmaps-de1b0e87-5e17-4589-bea3-161885b870f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04134742s
STEP: Saw pod success
May 12 13:04:11.055: INFO: Pod "pod-configmaps-de1b0e87-5e17-4589-bea3-161885b870f7" satisfied condition "Succeeded or Failed"
May 12 13:04:11.059: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-de1b0e87-5e17-4589-bea3-161885b870f7 container env-test: 
STEP: delete the pod
May 12 13:04:11.100: INFO: Waiting for pod pod-configmaps-de1b0e87-5e17-4589-bea3-161885b870f7 to disappear
May 12 13:04:11.171: INFO: Pod pod-configmaps-de1b0e87-5e17-4589-bea3-161885b870f7 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:04:11.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1373" for this suite.

• [SLOW TEST:6.272 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2364,"failed":0}
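The test above injects ConfigMap keys into a container as environment variables via `configMapKeyRef`. A rough sketch of that wiring as plain dicts, with a toy resolver standing in for what the kubelet does (key and value names are assumptions, not taken from the test):

```python
# Illustrative pod env entry consuming a ConfigMap key, mirroring the
# "consume configMaps" step above. Names/values are hypothetical.
config_map = {"metadata": {"name": "configmap-test"}, "data": {"DATA_1": "value-1"}}

pod_env = [{
    "name": "CONFIG_DATA_1",
    "valueFrom": {"configMapKeyRef": {"name": "configmap-test", "key": "DATA_1"}},
}]

def resolve_env(env, cm):
    # Simplified model of the kubelet's resolution: look each referenced
    # key up in the ConfigMap's data and bind it to the env var name.
    out = {}
    for entry in env:
        ref = entry["valueFrom"]["configMapKeyRef"]
        out[entry["name"]] = cm["data"][ref["key"]]
    return out
```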
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:04:11.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:04:31.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9000" for this suite.

• [SLOW TEST:20.443 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":146,"skipped":2385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:04:31.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:04:31.920: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6492e0a0-6c47-4728-8777-39857e3dca29" in namespace "downward-api-3240" to be "Succeeded or Failed"
May 12 13:04:32.008: INFO: Pod "downwardapi-volume-6492e0a0-6c47-4728-8777-39857e3dca29": Phase="Pending", Reason="", readiness=false. Elapsed: 87.735135ms
May 12 13:04:34.387: INFO: Pod "downwardapi-volume-6492e0a0-6c47-4728-8777-39857e3dca29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467330693s
May 12 13:04:36.477: INFO: Pod "downwardapi-volume-6492e0a0-6c47-4728-8777-39857e3dca29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.556565705s
May 12 13:04:38.838: INFO: Pod "downwardapi-volume-6492e0a0-6c47-4728-8777-39857e3dca29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.918088993s
STEP: Saw pod success
May 12 13:04:38.838: INFO: Pod "downwardapi-volume-6492e0a0-6c47-4728-8777-39857e3dca29" satisfied condition "Succeeded or Failed"
May 12 13:04:38.888: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-6492e0a0-6c47-4728-8777-39857e3dca29 container client-container: 
STEP: delete the pod
May 12 13:04:39.092: INFO: Waiting for pod downwardapi-volume-6492e0a0-6c47-4728-8777-39857e3dca29 to disappear
May 12 13:04:39.098: INFO: Pod downwardapi-volume-6492e0a0-6c47-4728-8777-39857e3dca29 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:04:39.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3240" for this suite.

• [SLOW TEST:7.469 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2435,"failed":0}
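The volume used above exposes resource fields through the downward API; when the container sets no cpu limit, the value written to the file falls back to the node's allocatable cpu, which is what the test verifies. A sketch of the volume item involved, as a plain dict (paths and names are illustrative):

```python
# Illustrative downwardAPI volume item exposing the container's cpu
# limit via resourceFieldRef; with no limit set, the kubelet substitutes
# the node's allocatable cpu (the behavior verified above).
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "cpu_limit",
            "resourceFieldRef": {
                "containerName": "client-container",
                "resource": "limits.cpu",
            },
        }],
    },
}
```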
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:04:39.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:04:45.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3650" for this suite.

• [SLOW TEST:6.239 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2475,"failed":0}
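The "blank command and args" case above exercises Kubernetes' documented rules for combining the image's ENTRYPOINT/CMD with the container spec's `command`/`args`: when both are unset, the image defaults win. A compact sketch of those rules:

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    # Kubernetes rules for combining the image's ENTRYPOINT/CMD with the
    # container's command/args:
    #   neither set      -> ENTRYPOINT + CMD   (the case tested above)
    #   args only        -> ENTRYPOINT + args
    #   command only     -> command
    #   command and args -> command + args
    if command:
        return command + (args or [])
    return entrypoint + (args if args is not None else cmd)
```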
S
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:04:45.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
May 12 13:04:45.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-435 -- logs-generator --log-lines-total 100 --run-duration 20s'
May 12 13:04:45.521: INFO: stderr: ""
May 12 13:04:45.522: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
May 12 13:04:45.522: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
May 12 13:04:45.522: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-435" to be "running and ready, or succeeded"
May 12 13:04:45.590: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 68.810279ms
May 12 13:04:47.594: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072560869s
May 12 13:04:49.599: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.077265383s
May 12 13:04:49.599: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
May 12 13:04:49.599: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings

May 12 13:04:49.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-435'
May 12 13:04:49.831: INFO: stderr: ""
May 12 13:04:49.831: INFO: stdout: "I0512 13:04:48.203758       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/rpg 524\nI0512 13:04:48.403956       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/twwz 445\nI0512 13:04:48.603940       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/m8tt 399\nI0512 13:04:48.803921       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/mnh5 409\nI0512 13:04:49.003913       1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/24p 254\nI0512 13:04:49.203924       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/jgds 437\nI0512 13:04:49.404011       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/b9s5 207\nI0512 13:04:49.603994       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/vqpp 276\nI0512 13:04:49.803933       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/v8tr 561\n"
STEP: limiting log lines
May 12 13:04:49.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-435 --tail=1'
May 12 13:04:49.935: INFO: stderr: ""
May 12 13:04:49.935: INFO: stdout: "I0512 13:04:49.803933       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/v8tr 561\n"
May 12 13:04:49.935: INFO: got output "I0512 13:04:49.803933       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/v8tr 561\n"
STEP: limiting log bytes
May 12 13:04:49.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-435 --limit-bytes=1'
May 12 13:04:50.046: INFO: stderr: ""
May 12 13:04:50.046: INFO: stdout: "I"
May 12 13:04:50.046: INFO: got output "I"
STEP: exposing timestamps
May 12 13:04:50.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-435 --tail=1 --timestamps'
May 12 13:04:50.165: INFO: stderr: ""
May 12 13:04:50.165: INFO: stdout: "2020-05-12T13:04:50.004049538Z I0512 13:04:50.003888       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/wc4r 416\n"
May 12 13:04:50.165: INFO: got output "2020-05-12T13:04:50.004049538Z I0512 13:04:50.003888       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/wc4r 416\n"
STEP: restricting to a time range
May 12 13:04:52.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-435 --since=1s'
May 12 13:04:52.779: INFO: stderr: ""
May 12 13:04:52.779: INFO: stdout: "I0512 13:04:51.803954       1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/5sg 284\nI0512 13:04:52.003903       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/684 306\nI0512 13:04:52.203921       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/mdw 532\nI0512 13:04:52.403922       1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/bfrn 598\nI0512 13:04:52.603909       1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/vph7 257\n"
May 12 13:04:52.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-435 --since=24h'
May 12 13:04:52.873: INFO: stderr: ""
May 12 13:04:52.874: INFO: stdout: "I0512 13:04:48.203758       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/rpg 524\nI0512 13:04:48.403956       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/twwz 445\nI0512 13:04:48.603940       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/m8tt 399\nI0512 13:04:48.803921       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/mnh5 409\nI0512 13:04:49.003913       1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/24p 254\nI0512 13:04:49.203924       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/jgds 437\nI0512 13:04:49.404011       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/b9s5 207\nI0512 13:04:49.603994       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/vqpp 276\nI0512 13:04:49.803933       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/v8tr 561\nI0512 13:04:50.003888       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/wc4r 416\nI0512 13:04:50.203877       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/d5nl 330\nI0512 13:04:50.403953       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/njsc 279\nI0512 13:04:50.603900       1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/w25 313\nI0512 13:04:50.803994       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/5jv 386\nI0512 13:04:51.003953       1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/pxr 413\nI0512 13:04:51.203899       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/rkp 283\nI0512 13:04:51.403943       1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/qxzk 589\nI0512 13:04:51.603921       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/rclf 405\nI0512 13:04:51.803954       1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/5sg 284\nI0512 13:04:52.003903       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/684 306\nI0512 13:04:52.203921       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/mdw 532\nI0512 13:04:52.403922       1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/bfrn 598\nI0512 13:04:52.603909       1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/vph7 257\nI0512 13:04:52.803904       1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/9d2t 218\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
May 12 13:04:52.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-435'
May 12 13:05:03.728: INFO: stderr: ""
May 12 13:05:03.728: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:05:03.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-435" for this suite.

• [SLOW TEST:18.395 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":149,"skipped":2476,"failed":0}
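The generator output filtered above with `--tail`, `--limit-bytes`, `--since`, and `--timestamps` has a fixed shape: a klog prefix ending in `]`, then `<id> <method> <url> <bytes>`. A small parser sketch for those lines (field names are assumptions):

```python
def parse_generator_line(line):
    # Drop the klog prefix ("I0512 13:04:49.803933  1 logs_generator.go:76]")
    # and split the remaining "<id> <method> <url> <bytes>" payload.
    payload = line.split("] ", 1)[1]
    ident, method, url, size = payload.split()
    return {"id": int(ident), "method": method, "url": url, "bytes": int(size)}

# One of the lines returned by `kubectl logs ... --tail=1` above:
sample = "I0512 13:04:49.803933       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/v8tr 561"
```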
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:05:03.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0512 13:05:44.906076       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 13:05:44.906: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:05:44.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9566" for this suite.

• [SLOW TEST:41.169 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":150,"skipped":2480,"failed":0}
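With the orphan policy verified above, deleting the ReplicationController leaves its pods alive; the garbage collector only strips the ownerReference that pointed at the deleted controller. A toy model of that behavior (simplified, not the real controller logic):

```python
def delete_controller(owner_name, pods, policy="Orphan"):
    # Simplified model: with the Orphan policy every dependent survives
    # with the deleted owner's reference removed; otherwise only pods
    # that still have some other owner are kept.
    survivors = []
    for pod in pods:
        refs = [r for r in pod.get("ownerReferences", []) if r["name"] != owner_name]
        if policy == "Orphan" or refs:
            survivors.append({**pod, "ownerReferences": refs})
    return survivors
```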
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:05:44.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
May 12 13:05:45.312: INFO: Waiting up to 5m0s for pod "var-expansion-7ef54532-13a7-47de-84a9-c1505bb568cd" in namespace "var-expansion-390" to be "Succeeded or Failed"
May 12 13:05:45.411: INFO: Pod "var-expansion-7ef54532-13a7-47de-84a9-c1505bb568cd": Phase="Pending", Reason="", readiness=false. Elapsed: 99.034034ms
May 12 13:05:47.471: INFO: Pod "var-expansion-7ef54532-13a7-47de-84a9-c1505bb568cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1588747s
May 12 13:05:49.475: INFO: Pod "var-expansion-7ef54532-13a7-47de-84a9-c1505bb568cd": Phase="Running", Reason="", readiness=true. Elapsed: 4.162641184s
May 12 13:05:51.513: INFO: Pod "var-expansion-7ef54532-13a7-47de-84a9-c1505bb568cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.201129252s
STEP: Saw pod success
May 12 13:05:51.513: INFO: Pod "var-expansion-7ef54532-13a7-47de-84a9-c1505bb568cd" satisfied condition "Succeeded or Failed"
May 12 13:05:51.663: INFO: Trying to get logs from node kali-worker pod var-expansion-7ef54532-13a7-47de-84a9-c1505bb568cd container dapi-container: 
STEP: delete the pod
May 12 13:05:53.270: INFO: Waiting for pod var-expansion-7ef54532-13a7-47de-84a9-c1505bb568cd to disappear
May 12 13:05:53.729: INFO: Pod var-expansion-7ef54532-13a7-47de-84a9-c1505bb568cd no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:05:53.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-390" for this suite.

• [SLOW TEST:9.410 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2526,"failed":0}
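The substitution exercised above uses Kubernetes' `$(VAR)` syntax in a container's command. A simplified sketch of the expansion rule, ignoring the `$$` escape handling the real expander also performs:

```python
import re

def expand_command(args, env):
    # Replace $(NAME) references with values from the container's
    # environment, leaving unknown references untouched (simplified:
    # the real expander also handles the $$ escape).
    def sub(match):
        return env.get(match.group(1), match.group(0))
    return [re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", sub, a) for a in args]
```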
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:05:54.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May 12 13:05:56.014: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May 12 13:05:56.280: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May 12 13:05:56.280: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May 12 13:05:56.562: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May 12 13:05:56.562: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May 12 13:05:56.832: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May 12 13:05:56.832: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May 12 13:06:05.373: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:06:05.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-9332" for this suite.

• [SLOW TEST:11.252 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":152,"skipped":2554,"failed":0}
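For the "Pod with no resource requirements" step above, the admission plugin fills in the LimitRange's `defaultRequest` and `default` values for every resource the container omits. A sketch of that fill-in, with the default values taken from the verification lines in the log (merge logic simplified):

```python
# Container defaults as verified above: requests cpu=100m, memory=200Mi,
# ephemeral-storage=200Gi; limits cpu=500m, memory=500Mi,
# ephemeral-storage=500Gi.
DEFAULT_REQUEST = {"cpu": "100m", "memory": "200Mi", "ephemeral-storage": "200Gi"}
DEFAULT_LIMIT = {"cpu": "500m", "memory": "500Mi", "ephemeral-storage": "500Gi"}

def apply_defaults(resources):
    # Fill in any resource the container did not set; values the pod
    # already specifies are left untouched.
    requests = dict(resources.get("requests", {}))
    limits = dict(resources.get("limits", {}))
    for res, val in DEFAULT_REQUEST.items():
        requests.setdefault(res, val)
    for res, val in DEFAULT_LIMIT.items():
        limits.setdefault(res, val)
    return {"requests": requests, "limits": limits}
```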
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:06:05.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:06:05.756: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:06:06.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-154" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":153,"skipped":2567,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:06:06.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:06:06.996: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 12 13:06:07.064: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:07.166: INFO: Number of nodes with available pods: 0
May 12 13:06:07.166: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:08.359: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:08.362: INFO: Number of nodes with available pods: 0
May 12 13:06:08.362: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:09.412: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:09.415: INFO: Number of nodes with available pods: 0
May 12 13:06:09.415: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:10.190: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:10.193: INFO: Number of nodes with available pods: 0
May 12 13:06:10.193: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:11.276: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:11.558: INFO: Number of nodes with available pods: 0
May 12 13:06:11.558: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:12.353: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:12.443: INFO: Number of nodes with available pods: 0
May 12 13:06:12.443: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:13.408: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:13.833: INFO: Number of nodes with available pods: 2
May 12 13:06:13.833: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 12 13:06:14.950: INFO: Wrong image for pod: daemon-set-cvxgp. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:14.950: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:14.984: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:16.002: INFO: Wrong image for pod: daemon-set-cvxgp. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:16.002: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:16.283: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:16.987: INFO: Wrong image for pod: daemon-set-cvxgp. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:16.987: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:16.989: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:17.988: INFO: Wrong image for pod: daemon-set-cvxgp. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:17.988: INFO: Pod daemon-set-cvxgp is not available
May 12 13:06:17.988: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:17.991: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:19.083: INFO: Wrong image for pod: daemon-set-cvxgp. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:19.083: INFO: Pod daemon-set-cvxgp is not available
May 12 13:06:19.083: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:19.111: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:19.988: INFO: Wrong image for pod: daemon-set-cvxgp. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:19.988: INFO: Pod daemon-set-cvxgp is not available
May 12 13:06:19.988: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:19.992: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:20.989: INFO: Wrong image for pod: daemon-set-cvxgp. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:20.989: INFO: Pod daemon-set-cvxgp is not available
May 12 13:06:20.989: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:20.994: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:21.989: INFO: Wrong image for pod: daemon-set-cvxgp. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:21.989: INFO: Pod daemon-set-cvxgp is not available
May 12 13:06:21.989: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:21.994: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:23.203: INFO: Wrong image for pod: daemon-set-cvxgp. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:23.204: INFO: Pod daemon-set-cvxgp is not available
May 12 13:06:23.204: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:23.263: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:23.987: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:23.987: INFO: Pod daemon-set-vzrcr is not available
May 12 13:06:23.990: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:25.005: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:25.005: INFO: Pod daemon-set-vzrcr is not available
May 12 13:06:25.009: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:26.017: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:26.017: INFO: Pod daemon-set-vzrcr is not available
May 12 13:06:26.021: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:26.988: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:26.988: INFO: Pod daemon-set-vzrcr is not available
May 12 13:06:26.992: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:27.988: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:28.036: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:28.988: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:28.992: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:29.987: INFO: Wrong image for pod: daemon-set-hkh57. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 12 13:06:29.987: INFO: Pod daemon-set-hkh57 is not available
May 12 13:06:29.990: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:30.988: INFO: Pod daemon-set-tvdng is not available
May 12 13:06:30.991: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 12 13:06:30.994: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:30.997: INFO: Number of nodes with available pods: 1
May 12 13:06:30.997: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:32.431: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:32.435: INFO: Number of nodes with available pods: 1
May 12 13:06:32.435: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:33.084: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:33.122: INFO: Number of nodes with available pods: 1
May 12 13:06:33.122: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:34.042: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:34.046: INFO: Number of nodes with available pods: 1
May 12 13:06:34.046: INFO: Node kali-worker is running more than one daemon pod
May 12 13:06:35.002: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:06:35.006: INFO: Number of nodes with available pods: 2
May 12 13:06:35.006: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6028, will wait for the garbage collector to delete the pods
May 12 13:06:35.079: INFO: Deleting DaemonSet.extensions daemon-set took: 6.322689ms
May 12 13:06:35.379: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.207174ms
May 12 13:06:43.783: INFO: Number of nodes with available pods: 0
May 12 13:06:43.783: INFO: Number of running nodes: 0, number of available pods: 0
May 12 13:06:43.785: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6028/daemonsets","resourceVersion":"3732469"},"items":null}

May 12 13:06:43.788: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6028/pods","resourceVersion":"3732469"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:06:43.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6028" for this suite.

• [SLOW TEST:37.387 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":154,"skipped":2590,"failed":0}
SSS
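The rolling-update block above swaps the DaemonSet's pod image node by node: the old pod is deleted, a replacement comes up, and only then does the next node proceed. The manifest itself is not shown in the log, so the following is an assumed sketch; only the DaemonSet name, namespace, update strategy, and the two images (updated from docker.io/library/httpd:2.4.38-alpine to us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12) come from the log:

```yaml
# Assumed shape of the test DaemonSet; label keys are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-6028
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate       # the strategy under test
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        # the test updates this from docker.io/library/httpd:2.4.38-alpine
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```

Because no toleration for node-role.kubernetes.io/master is set, the pods land only on the two workers, which matches the repeated "can't tolerate node kali-control-plane" skips in the log.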
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:06:43.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 12 13:06:48.714: INFO: Successfully updated pod "annotationupdatececc6046-bc5c-4d10-8964-78fd1e62e6c2"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:06:52.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8082" for this suite.

• [SLOW TEST:9.033 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2593,"failed":0}
SSSSSSSSS
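The Downward API spec above updates a pod's annotations and waits for the projected file to reflect the change. A hedged sketch of the kind of pod that test creates; the annotation key/value, pod name, and image are assumptions (the real pod name is generated, as the "annotationupdate..." name in the log suggests):

```yaml
# Illustrative pod projecting metadata.annotations into a downwardAPI volume.
# The kubelet rewrites the file when annotations change, which is what the
# "Successfully updated pod" step above exercises.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative; real name is generated
  annotations:
    builder: alice                 # assumed sample annotation
spec:
  containers:
  - name: client
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12  # assumed
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```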
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:06:52.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1046
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-1046
STEP: creating replication controller externalsvc in namespace services-1046
I0512 13:06:53.151574       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1046, replica count: 2
I0512 13:06:56.201971       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:06:59.202133       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:07:02.202318       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
May 12 13:07:02.485: INFO: Creating new exec pod
May 12 13:07:06.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1046 execpodfvr64 -- /bin/sh -x -c nslookup clusterip-service'
May 12 13:07:06.857: INFO: stderr: "I0512 13:07:06.775904    1899 log.go:172] (0xc000b9f080) (0xc0009a4820) Create stream\nI0512 13:07:06.775963    1899 log.go:172] (0xc000b9f080) (0xc0009a4820) Stream added, broadcasting: 1\nI0512 13:07:06.781374    1899 log.go:172] (0xc000b9f080) Reply frame received for 1\nI0512 13:07:06.781449    1899 log.go:172] (0xc000b9f080) (0xc0006a75e0) Create stream\nI0512 13:07:06.781469    1899 log.go:172] (0xc000b9f080) (0xc0006a75e0) Stream added, broadcasting: 3\nI0512 13:07:06.782476    1899 log.go:172] (0xc000b9f080) Reply frame received for 3\nI0512 13:07:06.782516    1899 log.go:172] (0xc000b9f080) (0xc000568a00) Create stream\nI0512 13:07:06.782529    1899 log.go:172] (0xc000b9f080) (0xc000568a00) Stream added, broadcasting: 5\nI0512 13:07:06.783453    1899 log.go:172] (0xc000b9f080) Reply frame received for 5\nI0512 13:07:06.842886    1899 log.go:172] (0xc000b9f080) Data frame received for 5\nI0512 13:07:06.842927    1899 log.go:172] (0xc000568a00) (5) Data frame handling\nI0512 13:07:06.842952    1899 log.go:172] (0xc000568a00) (5) Data frame sent\n+ nslookup clusterip-service\nI0512 13:07:06.850163    1899 log.go:172] (0xc000b9f080) Data frame received for 3\nI0512 13:07:06.850188    1899 log.go:172] (0xc0006a75e0) (3) Data frame handling\nI0512 13:07:06.850206    1899 log.go:172] (0xc0006a75e0) (3) Data frame sent\nI0512 13:07:06.851075    1899 log.go:172] (0xc000b9f080) Data frame received for 3\nI0512 13:07:06.851099    1899 log.go:172] (0xc0006a75e0) (3) Data frame handling\nI0512 13:07:06.851121    1899 log.go:172] (0xc0006a75e0) (3) Data frame sent\nI0512 13:07:06.851598    1899 log.go:172] (0xc000b9f080) Data frame received for 3\nI0512 13:07:06.851630    1899 log.go:172] (0xc0006a75e0) (3) Data frame handling\nI0512 13:07:06.851661    1899 log.go:172] (0xc000b9f080) Data frame received for 5\nI0512 13:07:06.851675    1899 log.go:172] (0xc000568a00) (5) Data frame handling\nI0512 13:07:06.853408    1899 log.go:172] 
(0xc000b9f080) Data frame received for 1\nI0512 13:07:06.853428    1899 log.go:172] (0xc0009a4820) (1) Data frame handling\nI0512 13:07:06.853444    1899 log.go:172] (0xc0009a4820) (1) Data frame sent\nI0512 13:07:06.853453    1899 log.go:172] (0xc000b9f080) (0xc0009a4820) Stream removed, broadcasting: 1\nI0512 13:07:06.853672    1899 log.go:172] (0xc000b9f080) Go away received\nI0512 13:07:06.853724    1899 log.go:172] (0xc000b9f080) (0xc0009a4820) Stream removed, broadcasting: 1\nI0512 13:07:06.853740    1899 log.go:172] (0xc000b9f080) (0xc0006a75e0) Stream removed, broadcasting: 3\nI0512 13:07:06.853748    1899 log.go:172] (0xc000b9f080) (0xc000568a00) Stream removed, broadcasting: 5\n"
May 12 13:07:06.857: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1046.svc.cluster.local\tcanonical name = externalsvc.services-1046.svc.cluster.local.\nName:\texternalsvc.services-1046.svc.cluster.local\nAddress: 10.99.145.181\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-1046, will wait for the garbage collector to delete the pods
May 12 13:07:06.917: INFO: Deleting ReplicationController externalsvc took: 7.179179ms
May 12 13:07:07.317: INFO: Terminating ReplicationController externalsvc pods took: 400.23152ms
May 12 13:07:13.819: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:07:13.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1046" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:21.011 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":156,"skipped":2602,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:07:13.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:07:13.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May 12 13:07:16.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3080 create -f -'
May 12 13:07:22.187: INFO: stderr: ""
May 12 13:07:22.187: INFO: stdout: "e2e-test-crd-publish-openapi-3212-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 12 13:07:22.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3080 delete e2e-test-crd-publish-openapi-3212-crds test-foo'
May 12 13:07:22.300: INFO: stderr: ""
May 12 13:07:22.300: INFO: stdout: "e2e-test-crd-publish-openapi-3212-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May 12 13:07:22.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3080 apply -f -'
May 12 13:07:22.532: INFO: stderr: ""
May 12 13:07:22.532: INFO: stdout: "e2e-test-crd-publish-openapi-3212-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 12 13:07:22.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3080 delete e2e-test-crd-publish-openapi-3212-crds test-foo'
May 12 13:07:22.652: INFO: stderr: ""
May 12 13:07:22.652: INFO: stdout: "e2e-test-crd-publish-openapi-3212-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May 12 13:07:22.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3080 create -f -'
May 12 13:07:22.895: INFO: rc: 1
May 12 13:07:22.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3080 apply -f -'
May 12 13:07:23.129: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May 12 13:07:23.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3080 create -f -'
May 12 13:07:23.375: INFO: rc: 1
May 12 13:07:23.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3080 apply -f -'
May 12 13:07:23.680: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May 12 13:07:23.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3212-crds'
May 12 13:07:23.938: INFO: stderr: ""
May 12 13:07:23.938: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3212-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May 12 13:07:23.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3212-crds.metadata'
May 12 13:07:24.222: INFO: stderr: ""
May 12 13:07:24.222: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3212-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. 
This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. 
If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. 
May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May 12 13:07:24.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3212-crds.spec'
May 12 13:07:24.469: INFO: stderr: ""
May 12 13:07:24.469: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3212-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May 12 13:07:24.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3212-crds.spec.bars'
May 12 13:07:24.725: INFO: stderr: ""
May 12 13:07:24.726: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3212-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 12 13:07:24.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3212-crds.spec.bars2'
May 12 13:07:24.962: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:07:26.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3080" for this suite.

• [SLOW TEST:13.053 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":157,"skipped":2602,"failed":0}
SS
------------------------------
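Editor's note: the single-line JSON records in this log (e.g. the `"PASSED ..."` line above) are Ginkgo's machine-readable progress output. A minimal sketch of extracting the suite counters from one such line — the helper name is illustrative, not part of the e2e framework:

```python
import json

# Each Ginkgo progress record is one JSON object per line, e.g.
# {"msg":"PASSED ...","total":275,"completed":157,"skipped":2602,"failed":0}
def parse_progress(line: str) -> dict:
    """Return the progress counters from one Ginkgo JSON status line."""
    rec = json.loads(line)
    return {k: rec[k] for k in ("total", "completed", "skipped", "failed")}

sample = ('{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI '
          '[Privileged:ClusterAdmin] works for CRD with validation schema '
          '[Conformance]","total":275,"completed":157,"skipped":2602,"failed":0}')
print(parse_progress(sample))
# {'total': 275, 'completed': 157, 'skipped': 2602, 'failed': 0}
```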
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:07:26.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 13:07:27.997: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 13:07:30.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885648, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885648, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885648, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885647, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:07:32.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885648, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885648, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885648, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885647, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 13:07:35.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:07:35.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7475" for this suite.
STEP: Destroying namespace "webhook-7475-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.447 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":158,"skipped":2604,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
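Editor's note: the webhook setup above polls the Deployment until its "Available" condition turns "True" (the two `v1.DeploymentStatus` dumps show it `False` with reason `MinimumReplicasUnavailable` while the ReplicaSet progresses). A minimal sketch of that readiness check, assuming a plain-dict mirror of the status dump rather than the client-go types:

```python
# A Deployment counts as available once its "Available" condition reports
# status "True". Dict keys here mirror the v1.DeploymentStatus dump in the
# log but are illustrative, not the client-go API.
def deployment_available(status: dict) -> bool:
    for cond in status.get("conditions", []):
        if cond["type"] == "Available":
            return cond["status"] == "True"
    return False

progressing = {"conditions": [
    {"type": "Available", "status": "False", "reason": "MinimumReplicasUnavailable"},
    {"type": "Progressing", "status": "True", "reason": "ReplicaSetUpdated"},
]}
ready = {"conditions": [{"type": "Available", "status": "True"}]}
print(deployment_available(progressing), deployment_available(ready))
# False True
```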
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:07:35.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:07:51.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-203" for this suite.

• [SLOW TEST:16.130 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":159,"skipped":2629,"failed":0}
S
------------------------------
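Editor's note: the ResourceQuota test above verifies that creating a ConfigMap is counted against the quota's `status.used` and that deleting it releases the usage. A toy simulation of that bookkeeping, under the stated assumption that this models the observed behavior and is not the quota controller's code:

```python
# Toy model of the quota lifecycle exercised above: create bumps used,
# delete releases it, and creation past the hard limit is rejected.
class QuotaTracker:
    def __init__(self, hard: int):
        self.hard = hard   # e.g. spec.hard["configmaps"]
        self.used = 0      # e.g. status.used["configmaps"]

    def create(self):
        if self.used >= self.hard:
            raise RuntimeError("exceeded quota")
        self.used += 1

    def delete(self):
        self.used = max(0, self.used - 1)

q = QuotaTracker(hard=1)
q.create()
print(q.used)  # 1
q.delete()
print(q.used)  # 0
```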
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:07:51.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
May 12 13:07:51.572: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5490" to be "Succeeded or Failed"
May 12 13:07:51.590: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.294122ms
May 12 13:07:53.623: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050668878s
May 12 13:07:55.627: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054541177s
May 12 13:07:57.802: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230172575s
May 12 13:07:59.807: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.234864316s
STEP: Saw pod success
May 12 13:07:59.807: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 12 13:07:59.811: INFO: Trying to get logs from node kali-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
May 12 13:08:00.015: INFO: Waiting for pod pod-host-path-test to disappear
May 12 13:08:00.221: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:08:00.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5490" for this suite.

• [SLOW TEST:8.932 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2630,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
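Editor's note: the repeated `Phase="Pending" ... Elapsed:` lines above come from a poll loop waiting up to 5m0s for the pod to reach "Succeeded or Failed". A minimal sketch of such a wait loop; the function name, timeout, and interval are illustrative:

```python
import time

# Poll a phase getter until the pod reaches a terminal phase or the
# timeout expires, mirroring the "Waiting up to 5m0s for pod ..." loop.
def wait_for_phase(get_phase, timeout=300.0, interval=2.0):
    start = time.monotonic()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if time.monotonic() - start >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), timeout=5, interval=0))
# Succeeded
```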
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:08:00.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 12 13:08:00.474: INFO: Waiting up to 5m0s for pod "pod-d809c1c1-4666-41d7-a50a-fa2a169ae842" in namespace "emptydir-6635" to be "Succeeded or Failed"
May 12 13:08:00.493: INFO: Pod "pod-d809c1c1-4666-41d7-a50a-fa2a169ae842": Phase="Pending", Reason="", readiness=false. Elapsed: 18.994249ms
May 12 13:08:02.790: INFO: Pod "pod-d809c1c1-4666-41d7-a50a-fa2a169ae842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316339416s
May 12 13:08:04.793: INFO: Pod "pod-d809c1c1-4666-41d7-a50a-fa2a169ae842": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319509237s
May 12 13:08:06.797: INFO: Pod "pod-d809c1c1-4666-41d7-a50a-fa2a169ae842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.32337144s
STEP: Saw pod success
May 12 13:08:06.797: INFO: Pod "pod-d809c1c1-4666-41d7-a50a-fa2a169ae842" satisfied condition "Succeeded or Failed"
May 12 13:08:06.800: INFO: Trying to get logs from node kali-worker pod pod-d809c1c1-4666-41d7-a50a-fa2a169ae842 container test-container: 
STEP: delete the pod
May 12 13:08:06.842: INFO: Waiting for pod pod-d809c1c1-4666-41d7-a50a-fa2a169ae842 to disappear
May 12 13:08:06.852: INFO: Pod pod-d809c1c1-4666-41d7-a50a-fa2a169ae842 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:08:06.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6635" for this suite.

• [SLOW TEST:6.447 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2647,"failed":0}
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:08:06.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-6648
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6648 to expose endpoints map[]
May 12 13:08:06.978: INFO: Get endpoints failed (9.843954ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 12 13:08:07.982: INFO: successfully validated that service endpoint-test2 in namespace services-6648 exposes endpoints map[] (1.014071501s elapsed)
STEP: Creating pod pod1 in namespace services-6648
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6648 to expose endpoints map[pod1:[80]]
May 12 13:08:12.293: INFO: successfully validated that service endpoint-test2 in namespace services-6648 exposes endpoints map[pod1:[80]] (4.303051969s elapsed)
STEP: Creating pod pod2 in namespace services-6648
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6648 to expose endpoints map[pod1:[80] pod2:[80]]
May 12 13:08:16.856: INFO: Unexpected endpoints: found map[fd3c1628-2239-4a27-a7d8-c0470c06eaad:[80]], expected map[pod1:[80] pod2:[80]] (4.558288056s elapsed, will retry)
May 12 13:08:17.861: INFO: successfully validated that service endpoint-test2 in namespace services-6648 exposes endpoints map[pod1:[80] pod2:[80]] (5.563268158s elapsed)
STEP: Deleting pod pod1 in namespace services-6648
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6648 to expose endpoints map[pod2:[80]]
May 12 13:08:18.976: INFO: successfully validated that service endpoint-test2 in namespace services-6648 exposes endpoints map[pod2:[80]] (1.112363289s elapsed)
STEP: Deleting pod pod2 in namespace services-6648
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6648 to expose endpoints map[]
May 12 13:08:20.046: INFO: successfully validated that service endpoint-test2 in namespace services-6648 exposes endpoints map[] (1.065968838s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:08:20.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6648" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.229 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":162,"skipped":2648,"failed":0}
SSSSSSSSSSS
------------------------------
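Editor's note: the Services test above repeatedly compares the service's observed endpoints against an expected map of pod name to ports (e.g. `map[pod1:[80] pod2:[80]]`), retrying until they match. A minimal sketch of the comparison step only; the retry loop and the pod-UID-to-name translation behind the "Unexpected endpoints" line are omitted:

```python
# Compare expected vs observed endpoint maps (pod name -> list of ports),
# ignoring key order and port order.
def endpoints_match(expected: dict, observed: dict) -> bool:
    return ({k: sorted(v) for k, v in expected.items()}
            == {k: sorted(v) for k, v in observed.items()})

print(endpoints_match({"pod1": [80], "pod2": [80]},
                      {"pod2": [80], "pod1": [80]}))  # True
print(endpoints_match({"pod2": [80]}, {}))            # False
```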
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:08:20.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 12 13:08:25.590: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:08:25.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4268" for this suite.

• [SLOW TEST:5.847 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2659,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:08:25.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 12 13:08:26.158: INFO: PodSpec: initContainers in spec.initContainers
May 12 13:09:22.677: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9e61ee21-dafa-4b4f-a3a3-6927de12be2c", GenerateName:"", Namespace:"init-container-7444", SelfLink:"/api/v1/namespaces/init-container-7444/pods/pod-init-9e61ee21-dafa-4b4f-a3a3-6927de12be2c", UID:"bd5eac03-8799-42ee-b73c-5adde3342749", ResourceVersion:"3733343", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724885706, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"158712165"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0033e06c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0033e06e0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0033e0700), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0033e0720)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-sz4cs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0026af600), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sz4cs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sz4cs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sz4cs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0030cb628), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022cb960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0030cb6c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0030cb6f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0030cb6f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0030cb6fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885706, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885706, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885706, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724885706, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.18", PodIP:"10.244.1.123", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.123"}}, StartTime:(*v1.Time)(0xc0033e0740), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022cba40)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022cbab0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d9c1761a7e18678f275bd1193884df42dd88eae8417cde10800ec7cebb5a5673", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0033e0780), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0033e0760), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0030cb78f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:09:22.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7444" for this suite.

• [SLOW TEST:56.788 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":164,"skipped":2667,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:09:22.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:09:26.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7126" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":165,"skipped":2674,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:09:26.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 12 13:09:27.033: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 12 13:09:27.049: INFO: Waiting for terminating namespaces to be deleted...
May 12 13:09:27.051: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
May 12 13:09:27.056: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 13:09:27.056: INFO: 	Container kindnet-cni ready: true, restart count 1
May 12 13:09:27.056: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 13:09:27.056: INFO: 	Container kube-proxy ready: true, restart count 0
May 12 13:09:27.056: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
May 12 13:09:27.060: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 13:09:27.060: INFO: 	Container kube-proxy ready: true, restart count 0
May 12 13:09:27.060: INFO: pod-init-9e61ee21-dafa-4b4f-a3a3-6927de12be2c from init-container-7444 started at 2020-05-12 13:08:26 +0000 UTC (1 container statuses recorded)
May 12 13:09:27.060: INFO: 	Container run1 ready: false, restart count 0
May 12 13:09:27.060: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 13:09:27.060: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-92040875-8623-43f8-bc27-10776a7355d7 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-92040875-8623-43f8-bc27-10776a7355d7 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-92040875-8623-43f8-bc27-10776a7355d7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:14:39.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6581" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:312.729 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":166,"skipped":2675,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:14:39.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:14:43.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1194" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2718,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:14:43.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 12 13:14:44.017: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:14:56.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6086" for this suite.

• [SLOW TEST:12.679 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":168,"skipped":2750,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:14:56.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-cnlq
STEP: Creating a pod to test atomic-volume-subpath
May 12 13:14:56.950: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cnlq" in namespace "subpath-7091" to be "Succeeded or Failed"
May 12 13:14:56.998: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 47.738255ms
May 12 13:14:59.004: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054164232s
May 12 13:15:01.008: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058498173s
May 12 13:15:03.012: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 6.062602319s
May 12 13:15:05.016: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 8.065828369s
May 12 13:15:07.020: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 10.069997302s
May 12 13:15:09.023: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 12.073308842s
May 12 13:15:11.026: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 14.076534061s
May 12 13:15:13.035: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 16.084713396s
May 12 13:15:15.088: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 18.137856498s
May 12 13:15:17.220: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 20.270059379s
May 12 13:15:19.225: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 22.275460792s
May 12 13:15:21.228: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Running", Reason="", readiness=true. Elapsed: 24.278290382s
May 12 13:15:23.231: INFO: Pod "pod-subpath-test-downwardapi-cnlq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.281276181s
STEP: Saw pod success
May 12 13:15:23.231: INFO: Pod "pod-subpath-test-downwardapi-cnlq" satisfied condition "Succeeded or Failed"
May 12 13:15:23.233: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-cnlq container test-container-subpath-downwardapi-cnlq: 
STEP: delete the pod
May 12 13:15:23.278: INFO: Waiting for pod pod-subpath-test-downwardapi-cnlq to disappear
May 12 13:15:23.321: INFO: Pod pod-subpath-test-downwardapi-cnlq no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-cnlq
May 12 13:15:23.321: INFO: Deleting pod "pod-subpath-test-downwardapi-cnlq" in namespace "subpath-7091"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:15:23.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7091" for this suite.

• [SLOW TEST:26.765 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":169,"skipped":2787,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:15:23.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-08929279-f816-41bd-b7b9-7cfa1cf20dff
STEP: Creating a pod to test consume secrets
May 12 13:15:23.453: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7bf63662-251a-4d9a-aa58-ae542ae576da" in namespace "projected-5560" to be "Succeeded or Failed"
May 12 13:15:23.524: INFO: Pod "pod-projected-secrets-7bf63662-251a-4d9a-aa58-ae542ae576da": Phase="Pending", Reason="", readiness=false. Elapsed: 70.529901ms
May 12 13:15:25.741: INFO: Pod "pod-projected-secrets-7bf63662-251a-4d9a-aa58-ae542ae576da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288465937s
May 12 13:15:27.745: INFO: Pod "pod-projected-secrets-7bf63662-251a-4d9a-aa58-ae542ae576da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.291790531s
STEP: Saw pod success
May 12 13:15:27.745: INFO: Pod "pod-projected-secrets-7bf63662-251a-4d9a-aa58-ae542ae576da" satisfied condition "Succeeded or Failed"
May 12 13:15:27.747: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-7bf63662-251a-4d9a-aa58-ae542ae576da container projected-secret-volume-test: 
STEP: delete the pod
May 12 13:15:27.859: INFO: Waiting for pod pod-projected-secrets-7bf63662-251a-4d9a-aa58-ae542ae576da to disappear
May 12 13:15:27.883: INFO: Pod pod-projected-secrets-7bf63662-251a-4d9a-aa58-ae542ae576da no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:15:27.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5560" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2852,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:15:27.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:15:28.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1885" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":171,"skipped":2861,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:15:28.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-7579f6c6-3479-46aa-88f3-a6c6b9d0e4bd
STEP: Creating a pod to test consume secrets
May 12 13:15:28.274: INFO: Waiting up to 5m0s for pod "pod-secrets-702785b7-b8f9-42c2-9358-6156820fee84" in namespace "secrets-8525" to be "Succeeded or Failed"
May 12 13:15:28.308: INFO: Pod "pod-secrets-702785b7-b8f9-42c2-9358-6156820fee84": Phase="Pending", Reason="", readiness=false. Elapsed: 34.15843ms
May 12 13:15:30.532: INFO: Pod "pod-secrets-702785b7-b8f9-42c2-9358-6156820fee84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258203508s
May 12 13:15:32.536: INFO: Pod "pod-secrets-702785b7-b8f9-42c2-9358-6156820fee84": Phase="Running", Reason="", readiness=true. Elapsed: 4.261691098s
May 12 13:15:34.540: INFO: Pod "pod-secrets-702785b7-b8f9-42c2-9358-6156820fee84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.265877979s
STEP: Saw pod success
May 12 13:15:34.540: INFO: Pod "pod-secrets-702785b7-b8f9-42c2-9358-6156820fee84" satisfied condition "Succeeded or Failed"
May 12 13:15:34.543: INFO: Trying to get logs from node kali-worker pod pod-secrets-702785b7-b8f9-42c2-9358-6156820fee84 container secret-volume-test: 
STEP: delete the pod
May 12 13:15:34.574: INFO: Waiting for pod pod-secrets-702785b7-b8f9-42c2-9358-6156820fee84 to disappear
May 12 13:15:34.590: INFO: Pod pod-secrets-702785b7-b8f9-42c2-9358-6156820fee84 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:15:34.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8525" for this suite.
STEP: Destroying namespace "secret-namespace-5823" for this suite.

• [SLOW TEST:6.582 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2865,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:15:34.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 13:15:35.438: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 13:15:38.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886135, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886135, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886135, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886135, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 13:15:41.251: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:15:41.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3984" for this suite.
STEP: Destroying namespace "webhook-3984-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.952 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":173,"skipped":2877,"failed":0}
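The test above toggles the CREATE operation in and out of a mutating webhook's rules and checks that ConfigMap creation is only mutated while CREATE is present. A minimal sketch of such a MutatingWebhookConfiguration (the webhook name and path are illustrative; only the service name and namespace appear in the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook      # illustrative name
webhooks:
  - name: mutate-configmaps.example.com   # illustrative
    clientConfig:
      service:
        name: e2e-test-webhook        # service name seen earlier in the log
        namespace: webhook-3984       # test namespace from the log
        path: /mutating-configmaps    # illustrative path
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]        # the test patches/updates this list to drop and re-add CREATE
        resources: ["configmaps"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
```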
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:15:41.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:15:41.702: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80e3d901-4d33-4e0a-8c7e-0eb26d91fff7" in namespace "downward-api-1946" to be "Succeeded or Failed"
May 12 13:15:41.729: INFO: Pod "downwardapi-volume-80e3d901-4d33-4e0a-8c7e-0eb26d91fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.06876ms
May 12 13:15:43.732: INFO: Pod "downwardapi-volume-80e3d901-4d33-4e0a-8c7e-0eb26d91fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029913422s
May 12 13:15:45.737: INFO: Pod "downwardapi-volume-80e3d901-4d33-4e0a-8c7e-0eb26d91fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035100922s
STEP: Saw pod success
May 12 13:15:45.737: INFO: Pod "downwardapi-volume-80e3d901-4d33-4e0a-8c7e-0eb26d91fff7" satisfied condition "Succeeded or Failed"
May 12 13:15:45.741: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-80e3d901-4d33-4e0a-8c7e-0eb26d91fff7 container client-container: 
STEP: delete the pod
May 12 13:15:45.772: INFO: Waiting for pod downwardapi-volume-80e3d901-4d33-4e0a-8c7e-0eb26d91fff7 to disappear
May 12 13:15:45.788: INFO: Pod downwardapi-volume-80e3d901-4d33-4e0a-8c7e-0eb26d91fff7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:15:45.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1946" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2929,"failed":0}
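The Downward API volume test exposes the container's memory limit as a file inside the pod via `resourceFieldRef`. A minimal sketch of the kind of pod spec involved (pod name, image, and limit value are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: client-container          # container name matches the log
      image: busybox                  # illustrative image
      command: ["cat", "/etc/podinfo/memory_limit"]
      resources:
        limits:
          memory: "64Mi"              # illustrative limit
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "memory_limit"
            resourceFieldRef:         # projects the container's memory limit into the file
              containerName: client-container
              resource: limits.memory
```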
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:15:45.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:15:46.187: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cf9a0c86-2359-4a10-9c36-51e3ab9ec035", Controller:(*bool)(0xc0021120a2), BlockOwnerDeletion:(*bool)(0xc0021120a3)}}
May 12 13:15:46.198: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5c880e76-8ced-458a-b448-471084df0904", Controller:(*bool)(0xc00211239a), BlockOwnerDeletion:(*bool)(0xc00211239b)}}
May 12 13:15:46.280: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"eb5b09a0-a0db-43a4-b634-3ec75937ea86", Controller:(*bool)(0xc003020c6a), BlockOwnerDeletion:(*bool)(0xc003020c6b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:15:51.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1633" for this suite.

• [SLOW TEST:5.460 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":175,"skipped":2934,"failed":0}
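The log above shows three pods whose `ownerReferences` form a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2); the garbage collector must still delete all of them rather than deadlock. A sketch of one link in that cycle, using the owner UID printed in the log (the image is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: pod1                      # pod2 ← pod1 ← pod3 ← pod2: a dependency circle
      uid: 5c880e76-8ced-458a-b448-471084df0904   # pod1's UID, from the log
      controller: true
      blockOwnerDeletion: true
spec:
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2     # illustrative image
```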
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:15:51.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9581
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9581
STEP: Creating statefulset with conflicting port in namespace statefulset-9581
STEP: Waiting until pod test-pod will start running in namespace statefulset-9581
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9581
May 12 13:15:57.880: INFO: Observed stateful pod in namespace: statefulset-9581, name: ss-0, uid: bdcf73a9-42b6-496e-95be-8d8fd7f00f22, status phase: Failed. Waiting for statefulset controller to delete.
May 12 13:15:57.918: INFO: Observed stateful pod in namespace: statefulset-9581, name: ss-0, uid: bdcf73a9-42b6-496e-95be-8d8fd7f00f22, status phase: Failed. Waiting for statefulset controller to delete.
May 12 13:15:58.181: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9581
STEP: Removing pod with conflicting port in namespace statefulset-9581
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9581 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 12 13:16:02.527: INFO: Deleting all statefulset in ns statefulset-9581
May 12 13:16:02.529: INFO: Scaling statefulset ss to 0
May 12 13:16:22.646: INFO: Waiting for statefulset status.replicas updated to 0
May 12 13:16:22.649: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:16:22.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9581" for this suite.

• [SLOW TEST:31.424 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":176,"skipped":2948,"failed":0}
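The eviction test works by pinning a plain pod and the StatefulSet's pod to the same node with a conflicting `hostPort`, so ss-0 fails; once the conflicting pod is removed, the controller recreates ss-0. A sketch of what the conflicting pod could look like (port number and image are illustrative; only the pod name and node appear in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod                      # pod name from the log
spec:
  nodeName: kali-worker               # illustrative: pinned to the node chosen by the test
  containers:
    - name: conflict
      image: k8s.gcr.io/pause:3.2     # illustrative image
      ports:
        - containerPort: 21017        # illustrative port
          hostPort: 21017             # same hostPort as ss-0 → ss-0 fails until this pod is deleted
```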
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:16:22.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 12 13:16:24.163: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:16:42.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-408" for this suite.

• [SLOW TEST:19.835 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":177,"skipped":2960,"failed":0}
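The init-container test creates a `restartPolicy: Always` pod whose init containers must run to completion, in order, before the app container starts. A minimal sketch (names, images, and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                     # illustrative name
spec:
  restartPolicy: Always
  initContainers:                     # each must exit 0, in order, before "app" starts
    - name: init1
      image: busybox                  # illustrative image
      command: ["/bin/true"]
    - name: init2
      image: busybox
      command: ["/bin/true"]
  containers:
    - name: app
      image: k8s.gcr.io/pause:3.2     # illustrative image
```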
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:16:42.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:16:43.071: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/: 
alternatives.log
containers/

[… the same two-line listing repeated 19 more times; the remainder of this proxy test and the start of the following Daemon set test are truncated in the captured log …]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 12 13:16:43.583: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:43.588: INFO: Number of nodes with available pods: 0
May 12 13:16:43.588: INFO: Node kali-worker is running more than one daemon pod
May 12 13:16:44.593: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:44.596: INFO: Number of nodes with available pods: 0
May 12 13:16:44.596: INFO: Node kali-worker is running more than one daemon pod
May 12 13:16:45.592: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:45.595: INFO: Number of nodes with available pods: 0
May 12 13:16:45.595: INFO: Node kali-worker is running more than one daemon pod
May 12 13:16:46.900: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:46.925: INFO: Number of nodes with available pods: 0
May 12 13:16:46.925: INFO: Node kali-worker is running more than one daemon pod
May 12 13:16:47.592: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:47.595: INFO: Number of nodes with available pods: 0
May 12 13:16:47.595: INFO: Node kali-worker is running more than one daemon pod
May 12 13:16:49.019: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:49.056: INFO: Number of nodes with available pods: 0
May 12 13:16:49.056: INFO: Node kali-worker is running more than one daemon pod
May 12 13:16:49.648: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:50.056: INFO: Number of nodes with available pods: 0
May 12 13:16:50.056: INFO: Node kali-worker is running more than one daemon pod
May 12 13:16:51.232: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:51.279: INFO: Number of nodes with available pods: 1
May 12 13:16:51.279: INFO: Node kali-worker is running more than one daemon pod
May 12 13:16:51.623: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:51.676: INFO: Number of nodes with available pods: 1
May 12 13:16:51.676: INFO: Node kali-worker is running more than one daemon pod
May 12 13:16:52.754: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:52.868: INFO: Number of nodes with available pods: 2
May 12 13:16:52.868: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 12 13:16:53.469: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:53.891: INFO: Number of nodes with available pods: 1
May 12 13:16:53.891: INFO: Node kali-worker2 is running more than one daemon pod
May 12 13:16:54.929: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:54.932: INFO: Number of nodes with available pods: 1
May 12 13:16:54.932: INFO: Node kali-worker2 is running more than one daemon pod
May 12 13:16:55.922: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:55.925: INFO: Number of nodes with available pods: 1
May 12 13:16:55.925: INFO: Node kali-worker2 is running more than one daemon pod
May 12 13:16:56.918: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:56.927: INFO: Number of nodes with available pods: 1
May 12 13:16:56.927: INFO: Node kali-worker2 is running more than one daemon pod
May 12 13:16:57.896: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:57.900: INFO: Number of nodes with available pods: 2
May 12 13:16:57.900: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5705, will wait for the garbage collector to delete the pods
May 12 13:16:57.962: INFO: Deleting DaemonSet.extensions daemon-set took: 4.858539ms
May 12 13:16:58.262: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.233822ms
May 12 13:17:13.729: INFO: Number of nodes with available pods: 0
May 12 13:17:13.729: INFO: Number of running nodes: 0, number of available pods: 0
May 12 13:17:13.731: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5705/daemonsets","resourceVersion":"3735333"},"items":null}

May 12 13:17:13.805: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5705/pods","resourceVersion":"3735334"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:17:13.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5705" for this suite.

• [SLOW TEST:30.617 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":179,"skipped":2980,"failed":0}
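The repeated "can't tolerate node kali-control-plane" lines explain why only the two workers ever run daemon pods: the DaemonSet's pod template carries no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint. A sketch of a DaemonSet that *would* also schedule onto the tainted node (labels and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                    # name from the log
spec:
  selector:
    matchLabels:
      app: daemon-set                 # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      tolerations:                    # without this, pods are skipped on the control-plane node
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: app
          image: k8s.gcr.io/pause:3.2 # illustrative image
```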
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:17:13.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:17:14.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 12 13:17:17.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5652 create -f -'
May 12 13:17:18.100: INFO: stderr: ""
May 12 13:17:18.100: INFO: stdout: "e2e-test-crd-publish-openapi-9605-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 12 13:17:18.100: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5652 delete e2e-test-crd-publish-openapi-9605-crds test-cr'
May 12 13:17:18.242: INFO: stderr: ""
May 12 13:17:18.242: INFO: stdout: "e2e-test-crd-publish-openapi-9605-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 12 13:17:18.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5652 apply -f -'
May 12 13:17:18.490: INFO: stderr: ""
May 12 13:17:18.490: INFO: stdout: "e2e-test-crd-publish-openapi-9605-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 12 13:17:18.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5652 delete e2e-test-crd-publish-openapi-9605-crds test-cr'
May 12 13:17:18.608: INFO: stderr: ""
May 12 13:17:18.608: INFO: stdout: "e2e-test-crd-publish-openapi-9605-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 12 13:17:18.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9605-crds'
May 12 13:17:18.986: INFO: stderr: ""
May 12 13:17:18.986: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9605-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:17:21.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5652" for this suite.

• [SLOW TEST:8.026 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":180,"skipped":2997,"failed":0}
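A CRD "without validation schema" that still accepts arbitrary unknown properties (as the kubectl create/apply steps above demonstrate) needs `x-kubernetes-preserve-unknown-fields` under apiextensions.k8s.io/v1. A sketch with generic illustrative names in place of the e2e-generated ones:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com           # illustrative; must be <plural>.<group>
spec:
  group: example.com                  # illustrative group
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget                      # illustrative kind
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # accept any unknown properties
```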
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:17:21.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-mpn9
STEP: Creating a pod to test atomic-volume-subpath
May 12 13:17:22.085: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mpn9" in namespace "subpath-5264" to be "Succeeded or Failed"
May 12 13:17:22.105: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.923061ms
May 12 13:17:24.579: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493664593s
May 12 13:17:26.592: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 4.506573645s
May 12 13:17:28.596: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 6.510713197s
May 12 13:17:30.600: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 8.514932693s
May 12 13:17:32.604: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 10.518696152s
May 12 13:17:34.608: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 12.522410762s
May 12 13:17:36.612: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 14.526439218s
May 12 13:17:38.615: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 16.529770554s
May 12 13:17:40.620: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 18.534292328s
May 12 13:17:42.624: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 20.538464947s
May 12 13:17:44.627: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Running", Reason="", readiness=true. Elapsed: 22.541792882s
May 12 13:17:46.630: INFO: Pod "pod-subpath-test-secret-mpn9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.545076356s
STEP: Saw pod success
May 12 13:17:46.630: INFO: Pod "pod-subpath-test-secret-mpn9" satisfied condition "Succeeded or Failed"
May 12 13:17:46.633: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-mpn9 container test-container-subpath-secret-mpn9: 
STEP: delete the pod
May 12 13:17:46.774: INFO: Waiting for pod pod-subpath-test-secret-mpn9 to disappear
May 12 13:17:47.006: INFO: Pod pod-subpath-test-secret-mpn9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-mpn9
May 12 13:17:47.006: INFO: Deleting pod "pod-subpath-test-secret-mpn9" in namespace "subpath-5264"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:17:47.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5264" for this suite.

• [SLOW TEST:25.115 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":181,"skipped":3004,"failed":0}
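The subpath test mounts a single entry of a secret volume via `subPath` rather than the whole volume directory, then verifies the container can read it atomically across updates. A minimal sketch (image, key, and secret name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret       # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox                  # illustrative image
      command: ["cat", "/test-volume/my-key"]
      volumeMounts:
        - name: secret-volume
          mountPath: /test-volume/my-key
          subPath: my-key             # mounts only this key from the volume, not the directory
  volumes:
    - name: secret-volume
      secret:
        secretName: my-secret         # illustrative secret name
```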
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:17:47.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-5480
STEP: creating replication controller nodeport-test in namespace services-5480
I0512 13:17:47.483203       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5480, replica count: 2
I0512 13:17:50.533675       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:17:53.533830       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 12 13:17:53.533: INFO: Creating new exec pod
May 12 13:17:58.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5480 execpoddsr2g -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
May 12 13:18:14.828: INFO: stderr: "I0512 13:18:14.318670    2310 log.go:172] (0xc000afa0b0) (0xc0006bc140) Create stream\nI0512 13:18:14.318702    2310 log.go:172] (0xc000afa0b0) (0xc0006bc140) Stream added, broadcasting: 1\nI0512 13:18:14.320717    2310 log.go:172] (0xc000afa0b0) Reply frame received for 1\nI0512 13:18:14.320757    2310 log.go:172] (0xc000afa0b0) (0xc000720000) Create stream\nI0512 13:18:14.320771    2310 log.go:172] (0xc000afa0b0) (0xc000720000) Stream added, broadcasting: 3\nI0512 13:18:14.321591    2310 log.go:172] (0xc000afa0b0) Reply frame received for 3\nI0512 13:18:14.321613    2310 log.go:172] (0xc000afa0b0) (0xc000827540) Create stream\nI0512 13:18:14.321621    2310 log.go:172] (0xc000afa0b0) (0xc000827540) Stream added, broadcasting: 5\nI0512 13:18:14.322252    2310 log.go:172] (0xc000afa0b0) Reply frame received for 5\nI0512 13:18:14.377561    2310 log.go:172] (0xc000afa0b0) Data frame received for 5\nI0512 13:18:14.377577    2310 log.go:172] (0xc000827540) (5) Data frame handling\nI0512 13:18:14.377586    2310 log.go:172] (0xc000827540) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0512 13:18:14.821773    2310 log.go:172] (0xc000afa0b0) Data frame received for 5\nI0512 13:18:14.821798    2310 log.go:172] (0xc000827540) (5) Data frame handling\nI0512 13:18:14.821809    2310 log.go:172] (0xc000827540) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0512 13:18:14.822202    2310 log.go:172] (0xc000afa0b0) Data frame received for 3\nI0512 13:18:14.822217    2310 log.go:172] (0xc000720000) (3) Data frame handling\nI0512 13:18:14.822470    2310 log.go:172] (0xc000afa0b0) Data frame received for 5\nI0512 13:18:14.822483    2310 log.go:172] (0xc000827540) (5) Data frame handling\nI0512 13:18:14.824055    2310 log.go:172] (0xc000afa0b0) Data frame received for 1\nI0512 13:18:14.824067    2310 log.go:172] (0xc0006bc140) (1) Data frame handling\nI0512 13:18:14.824080    2310 log.go:172] (0xc0006bc140) 
(1) Data frame sent\nI0512 13:18:14.824090    2310 log.go:172] (0xc000afa0b0) (0xc0006bc140) Stream removed, broadcasting: 1\nI0512 13:18:14.824099    2310 log.go:172] (0xc000afa0b0) Go away received\nI0512 13:18:14.824393    2310 log.go:172] (0xc000afa0b0) (0xc0006bc140) Stream removed, broadcasting: 1\nI0512 13:18:14.824409    2310 log.go:172] (0xc000afa0b0) (0xc000720000) Stream removed, broadcasting: 3\nI0512 13:18:14.824414    2310 log.go:172] (0xc000afa0b0) (0xc000827540) Stream removed, broadcasting: 5\n"
May 12 13:18:14.828: INFO: stdout: ""
May 12 13:18:14.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5480 execpoddsr2g -- /bin/sh -x -c nc -zv -t -w 2 10.106.253.29 80'
May 12 13:18:15.440: INFO: stderr: "I0512 13:18:15.379238    2346 log.go:172] (0xc0007d0000) (0xc0007d4000) Create stream\nI0512 13:18:15.379288    2346 log.go:172] (0xc0007d0000) (0xc0007d4000) Stream added, broadcasting: 1\nI0512 13:18:15.381801    2346 log.go:172] (0xc0007d0000) Reply frame received for 1\nI0512 13:18:15.381834    2346 log.go:172] (0xc0007d0000) (0xc0007d40a0) Create stream\nI0512 13:18:15.381851    2346 log.go:172] (0xc0007d0000) (0xc0007d40a0) Stream added, broadcasting: 3\nI0512 13:18:15.382605    2346 log.go:172] (0xc0007d0000) Reply frame received for 3\nI0512 13:18:15.382649    2346 log.go:172] (0xc0007d0000) (0xc0007be140) Create stream\nI0512 13:18:15.382667    2346 log.go:172] (0xc0007d0000) (0xc0007be140) Stream added, broadcasting: 5\nI0512 13:18:15.383358    2346 log.go:172] (0xc0007d0000) Reply frame received for 5\nI0512 13:18:15.434979    2346 log.go:172] (0xc0007d0000) Data frame received for 5\nI0512 13:18:15.435007    2346 log.go:172] (0xc0007be140) (5) Data frame handling\nI0512 13:18:15.435018    2346 log.go:172] (0xc0007be140) (5) Data frame sent\nI0512 13:18:15.435025    2346 log.go:172] (0xc0007d0000) Data frame received for 5\nI0512 13:18:15.435030    2346 log.go:172] (0xc0007be140) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.253.29 80\nConnection to 10.106.253.29 80 port [tcp/http] succeeded!\nI0512 13:18:15.435044    2346 log.go:172] (0xc0007d0000) Data frame received for 3\nI0512 13:18:15.435051    2346 log.go:172] (0xc0007d40a0) (3) Data frame handling\nI0512 13:18:15.436421    2346 log.go:172] (0xc0007d0000) Data frame received for 1\nI0512 13:18:15.436437    2346 log.go:172] (0xc0007d4000) (1) Data frame handling\nI0512 13:18:15.436448    2346 log.go:172] (0xc0007d4000) (1) Data frame sent\nI0512 13:18:15.436537    2346 log.go:172] (0xc0007d0000) (0xc0007d4000) Stream removed, broadcasting: 1\nI0512 13:18:15.436689    2346 log.go:172] (0xc0007d0000) Go away received\nI0512 13:18:15.436785    2346 log.go:172] 
(0xc0007d0000) (0xc0007d4000) Stream removed, broadcasting: 1\nI0512 13:18:15.436797    2346 log.go:172] (0xc0007d0000) (0xc0007d40a0) Stream removed, broadcasting: 3\nI0512 13:18:15.436804    2346 log.go:172] (0xc0007d0000) (0xc0007be140) Stream removed, broadcasting: 5\n"
May 12 13:18:15.440: INFO: stdout: ""
May 12 13:18:15.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5480 execpoddsr2g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 32650'
May 12 13:18:15.942: INFO: stderr: "I0512 13:18:15.877860    2365 log.go:172] (0xc000a83130) (0xc000bbc5a0) Create stream\nI0512 13:18:15.877896    2365 log.go:172] (0xc000a83130) (0xc000bbc5a0) Stream added, broadcasting: 1\nI0512 13:18:15.881325    2365 log.go:172] (0xc000a83130) Reply frame received for 1\nI0512 13:18:15.881363    2365 log.go:172] (0xc000a83130) (0xc0004e8a00) Create stream\nI0512 13:18:15.881375    2365 log.go:172] (0xc000a83130) (0xc0004e8a00) Stream added, broadcasting: 3\nI0512 13:18:15.882280    2365 log.go:172] (0xc000a83130) Reply frame received for 3\nI0512 13:18:15.882310    2365 log.go:172] (0xc000a83130) (0xc000966000) Create stream\nI0512 13:18:15.882321    2365 log.go:172] (0xc000a83130) (0xc000966000) Stream added, broadcasting: 5\nI0512 13:18:15.883002    2365 log.go:172] (0xc000a83130) Reply frame received for 5\nI0512 13:18:15.938006    2365 log.go:172] (0xc000a83130) Data frame received for 5\nI0512 13:18:15.938024    2365 log.go:172] (0xc000966000) (5) Data frame handling\nI0512 13:18:15.938034    2365 log.go:172] (0xc000966000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.15 32650\nI0512 13:18:15.938125    2365 log.go:172] (0xc000a83130) Data frame received for 5\nI0512 13:18:15.938153    2365 log.go:172] (0xc000966000) (5) Data frame handling\nI0512 13:18:15.938171    2365 log.go:172] (0xc000966000) (5) Data frame sent\nConnection to 172.17.0.15 32650 port [tcp/32650] succeeded!\nI0512 13:18:15.938583    2365 log.go:172] (0xc000a83130) Data frame received for 3\nI0512 13:18:15.938615    2365 log.go:172] (0xc000a83130) Data frame received for 5\nI0512 13:18:15.938638    2365 log.go:172] (0xc000966000) (5) Data frame handling\nI0512 13:18:15.938654    2365 log.go:172] (0xc0004e8a00) (3) Data frame handling\nI0512 13:18:15.939796    2365 log.go:172] (0xc000a83130) Data frame received for 1\nI0512 13:18:15.939809    2365 log.go:172] (0xc000bbc5a0) (1) Data frame handling\nI0512 13:18:15.939817    2365 log.go:172] 
(0xc000bbc5a0) (1) Data frame sent\nI0512 13:18:15.939921    2365 log.go:172] (0xc000a83130) (0xc000bbc5a0) Stream removed, broadcasting: 1\nI0512 13:18:15.940025    2365 log.go:172] (0xc000a83130) Go away received\nI0512 13:18:15.940133    2365 log.go:172] (0xc000a83130) (0xc000bbc5a0) Stream removed, broadcasting: 1\nI0512 13:18:15.940144    2365 log.go:172] (0xc000a83130) (0xc0004e8a00) Stream removed, broadcasting: 3\nI0512 13:18:15.940149    2365 log.go:172] (0xc000a83130) (0xc000966000) Stream removed, broadcasting: 5\n"
May 12 13:18:15.943: INFO: stdout: ""
May 12 13:18:15.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5480 execpoddsr2g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 32650'
May 12 13:18:16.133: INFO: stderr: "I0512 13:18:16.058101    2385 log.go:172] (0xc000510210) (0xc000b18000) Create stream\nI0512 13:18:16.058158    2385 log.go:172] (0xc000510210) (0xc000b18000) Stream added, broadcasting: 1\nI0512 13:18:16.060505    2385 log.go:172] (0xc000510210) Reply frame received for 1\nI0512 13:18:16.060526    2385 log.go:172] (0xc000510210) (0xc0009205a0) Create stream\nI0512 13:18:16.060531    2385 log.go:172] (0xc000510210) (0xc0009205a0) Stream added, broadcasting: 3\nI0512 13:18:16.061554    2385 log.go:172] (0xc000510210) Reply frame received for 3\nI0512 13:18:16.061587    2385 log.go:172] (0xc000510210) (0xc000b180a0) Create stream\nI0512 13:18:16.061598    2385 log.go:172] (0xc000510210) (0xc000b180a0) Stream added, broadcasting: 5\nI0512 13:18:16.062388    2385 log.go:172] (0xc000510210) Reply frame received for 5\nI0512 13:18:16.127331    2385 log.go:172] (0xc000510210) Data frame received for 5\nI0512 13:18:16.127359    2385 log.go:172] (0xc000b180a0) (5) Data frame handling\nI0512 13:18:16.127368    2385 log.go:172] (0xc000b180a0) (5) Data frame sent\nI0512 13:18:16.127375    2385 log.go:172] (0xc000510210) Data frame received for 5\nI0512 13:18:16.127381    2385 log.go:172] (0xc000b180a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 32650\nConnection to 172.17.0.18 32650 port [tcp/32650] succeeded!\nI0512 13:18:16.127398    2385 log.go:172] (0xc000510210) Data frame received for 3\nI0512 13:18:16.127406    2385 log.go:172] (0xc0009205a0) (3) Data frame handling\nI0512 13:18:16.128328    2385 log.go:172] (0xc000510210) Data frame received for 1\nI0512 13:18:16.128382    2385 log.go:172] (0xc000b18000) (1) Data frame handling\nI0512 13:18:16.128392    2385 log.go:172] (0xc000b18000) (1) Data frame sent\nI0512 13:18:16.128402    2385 log.go:172] (0xc000510210) (0xc000b18000) Stream removed, broadcasting: 1\nI0512 13:18:16.128420    2385 log.go:172] (0xc000510210) Go away received\nI0512 13:18:16.128688    2385 log.go:172] 
(0xc000510210) (0xc000b18000) Stream removed, broadcasting: 1\nI0512 13:18:16.128714    2385 log.go:172] (0xc000510210) (0xc0009205a0) Stream removed, broadcasting: 3\nI0512 13:18:16.128727    2385 log.go:172] (0xc000510210) (0xc000b180a0) Stream removed, broadcasting: 5\n"
May 12 13:18:16.133: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:18:16.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5480" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:29.123 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":182,"skipped":3016,"failed":0}
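Editorial note on the NodePort test above: a minimal Service manifest resembling what it exercises is sketched below. Names, labels, and the selector are illustrative assumptions, not taken from this log; the actual test auto-allocates its nodePort (32650 in the run above).

```yaml
# Hypothetical sketch: a NodePort Service like nodeport-test above.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test        # service name probed in the log
spec:
  type: NodePort
  selector:
    app: nodeport-test       # assumed pod label
  ports:
  - port: 80                 # cluster-IP port checked with `nc -zv <svc> 80`
    targetPort: 80
    # nodePort is normally left unset and auto-allocated (32650 in this run)
```

The log then verifies reachability over three paths from an exec pod, all with `nc -zv -t -w 2`: the service DNS name, the ClusterIP (10.106.253.29), and each node IP at the allocated nodePort.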
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:18:16.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:18:16.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c82f818c-a126-49da-a7ce-af431cd6d9e7" in namespace "downward-api-9053" to be "Succeeded or Failed"
May 12 13:18:16.738: INFO: Pod "downwardapi-volume-c82f818c-a126-49da-a7ce-af431cd6d9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 57.798314ms
May 12 13:18:18.742: INFO: Pod "downwardapi-volume-c82f818c-a126-49da-a7ce-af431cd6d9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061819406s
May 12 13:18:21.020: INFO: Pod "downwardapi-volume-c82f818c-a126-49da-a7ce-af431cd6d9e7": Phase="Running", Reason="", readiness=true. Elapsed: 4.339918086s
May 12 13:18:23.500: INFO: Pod "downwardapi-volume-c82f818c-a126-49da-a7ce-af431cd6d9e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.82037854s
STEP: Saw pod success
May 12 13:18:23.500: INFO: Pod "downwardapi-volume-c82f818c-a126-49da-a7ce-af431cd6d9e7" satisfied condition "Succeeded or Failed"
May 12 13:18:23.701: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c82f818c-a126-49da-a7ce-af431cd6d9e7 container client-container: 
STEP: delete the pod
May 12 13:18:24.743: INFO: Waiting for pod downwardapi-volume-c82f818c-a126-49da-a7ce-af431cd6d9e7 to disappear
May 12 13:18:24.748: INFO: Pod downwardapi-volume-c82f818c-a126-49da-a7ce-af431cd6d9e7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:18:24.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9053" for this suite.

• [SLOW TEST:8.767 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3016,"failed":0}
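Editorial note on the Downward API volume test above: a pod spec of the shape this test creates is sketched below. The image, command, and paths are illustrative assumptions; only the container name `client-container` appears in the log.

```yaml
# Hypothetical sketch: a pod using a downwardAPI volume with an explicit
# per-item file mode, as "should set mode on item file" verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name seen in the log
    image: busybox                   # assumed image
    command: ["sh", "-c", "stat -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # the per-item mode under test
```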
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:18:24.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:18:33.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3235" for this suite.

• [SLOW TEST:9.266 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":184,"skipped":3048,"failed":0}
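Editorial note on the ResourceQuota test above: the test creates a quota object and waits for the quota controller to populate its status (`used` vs. `hard`). A manifest of that shape, with illustrative limits, might look like:

```yaml
# Hypothetical sketch: a ResourceQuota whose status the controller
# must calculate promptly, as the test above checks.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota            # illustrative name
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 1Gi
```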
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:18:34.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
May 12 13:18:36.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info'
May 12 13:18:37.003: INFO: stderr: ""
May 12 13:18:37.003: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:18:37.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4345" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":185,"skipped":3060,"failed":0}
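Editorial note on the cluster-info stdout above: it is wrapped in ANSI SGR color codes (`\x1b[0;32m` and friends), which is why the raw log looks noisy. When post-processing captured output like this, a small portable filter strips them; this is an aside for reading the log, not part of the test itself.

```shell
# Strip ANSI SGR color sequences (ESC [ ... m) from a stream.
# POSIX sh + sed; the example line mirrors the cluster-info output above.
esc=$(printf '\033')
strip_ansi() { sed "s/${esc}\[[0-9;]*m//g"; }

printf '\033[0;32mKubernetes master\033[0m is running at \033[0;33mhttps://172.30.12.66:32772\033[0m\n' | strip_ansi
# prints: Kubernetes master is running at https://172.30.12.66:32772
```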
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:18:37.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:19:31.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8888" for this suite.

• [SLOW TEST:53.120 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3072,"failed":0}
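Editorial note on the container-runtime test above: the container-name suffixes `rpa`/`rpof`/`rpn` appear to encode restartPolicy Always/OnFailure/Never; each container exits and the test checks RestartCount, Phase, the Ready condition, and State against that policy. One of the three pods might look like the following sketch (image and command are assumptions):

```yaml
# Hypothetical sketch: the OnFailure variant of the three test pods.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof-example   # illustrative name
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd-rpof
    image: busybox                   # assumed image
    command: ["sh", "-c", "exit 1"]  # exits nonzero so the kubelet must restart it
```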
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:19:31.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-cdcd8520-52df-4476-bd09-08e02b6b4144 in namespace container-probe-3124
May 12 13:19:37.312: INFO: Started pod test-webserver-cdcd8520-52df-4476-bd09-08e02b6b4144 in namespace container-probe-3124
STEP: checking the pod's current state and verifying that restartCount is present
May 12 13:19:37.315: INFO: Initial restart count of pod test-webserver-cdcd8520-52df-4476-bd09-08e02b6b4144 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:23:37.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3124" for this suite.

• [SLOW TEST:247.571 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3078,"failed":0}
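Editorial note on the liveness-probe test above: the pod serves an HTTP endpoint and the test verifies restartCount stays 0 for the full observation window (the ~4-minute gap between 13:19:37 and 13:23:37 in the log). A probe spec of that shape, with an assumed image and illustrative timings, might look like:

```yaml
# Hypothetical sketch: an HTTP liveness probe that keeps passing,
# so the container is never restarted.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example   # illustrative name
spec:
  containers:
  - name: test-webserver
    image: my-webserver:latest   # assumed image that answers 200 on /healthz
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 3
```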
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:23:38.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
May 12 13:23:40.288: INFO: created pod pod-service-account-defaultsa
May 12 13:23:40.288: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 12 13:23:40.318: INFO: created pod pod-service-account-mountsa
May 12 13:23:40.318: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 12 13:23:40.506: INFO: created pod pod-service-account-nomountsa
May 12 13:23:40.506: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 12 13:23:40.554: INFO: created pod pod-service-account-defaultsa-mountspec
May 12 13:23:40.554: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 12 13:23:40.841: INFO: created pod pod-service-account-mountsa-mountspec
May 12 13:23:40.841: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 12 13:23:41.358: INFO: created pod pod-service-account-nomountsa-mountspec
May 12 13:23:41.359: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 12 13:23:41.434: INFO: created pod pod-service-account-defaultsa-nomountspec
May 12 13:23:41.434: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 12 13:23:41.700: INFO: created pod pod-service-account-mountsa-nomountspec
May 12 13:23:41.700: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 12 13:23:41.737: INFO: created pod pod-service-account-nomountsa-nomountspec
May 12 13:23:41.737: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:23:41.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8197" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":188,"skipped":3103,"failed":0}
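Editorial note on the token-automount test above: it creates pods for each combination of service-account setting and pod-spec override; the effective behavior is the pod's `automountServiceAccountToken` if set, else the service account's, else true (which matches the `mount: true/false` lines in the log). An opt-out pod of that shape might look like the following sketch (image and command are assumptions):

```yaml
# Hypothetical sketch: opting out of API token automount at the pod level,
# as the *-nomountspec pods above do.
apiVersion: v1
kind: Pod
metadata:
  name: nomountspec-example   # illustrative name
spec:
  serviceAccountName: default
  automountServiceAccountToken: false   # no token volume gets mounted
  containers:
  - name: main
    image: busybox            # assumed image
    command: ["sleep", "3600"]
```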
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:23:42.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-525
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-525
I0512 13:23:46.278458       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-525, replica count: 2
I0512 13:23:49.328987       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:23:52.329299       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:23:55.329534       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:23:58.329775       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:24:01.330089       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 12 13:24:01.330: INFO: Creating new exec pod
May 12 13:24:08.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-525 execpodlt6qz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 12 13:24:08.883: INFO: stderr: "I0512 13:24:08.781306    2426 log.go:172] (0xc00076e000) (0xc0002ac0a0) Create stream\nI0512 13:24:08.781363    2426 log.go:172] (0xc00076e000) (0xc0002ac0a0) Stream added, broadcasting: 1\nI0512 13:24:08.782941    2426 log.go:172] (0xc00076e000) Reply frame received for 1\nI0512 13:24:08.782978    2426 log.go:172] (0xc00076e000) (0xc000867d60) Create stream\nI0512 13:24:08.782988    2426 log.go:172] (0xc00076e000) (0xc000867d60) Stream added, broadcasting: 3\nI0512 13:24:08.784067    2426 log.go:172] (0xc00076e000) Reply frame received for 3\nI0512 13:24:08.784108    2426 log.go:172] (0xc00076e000) (0xc0009ba000) Create stream\nI0512 13:24:08.784125    2426 log.go:172] (0xc00076e000) (0xc0009ba000) Stream added, broadcasting: 5\nI0512 13:24:08.784931    2426 log.go:172] (0xc00076e000) Reply frame received for 5\nI0512 13:24:08.877101    2426 log.go:172] (0xc00076e000) Data frame received for 5\nI0512 13:24:08.877407    2426 log.go:172] (0xc0009ba000) (5) Data frame handling\nI0512 13:24:08.877444    2426 log.go:172] (0xc0009ba000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0512 13:24:08.877956    2426 log.go:172] (0xc00076e000) Data frame received for 5\nI0512 13:24:08.877974    2426 log.go:172] (0xc0009ba000) (5) Data frame handling\nI0512 13:24:08.877992    2426 log.go:172] (0xc0009ba000) (5) Data frame sent\nI0512 13:24:08.878003    2426 log.go:172] (0xc00076e000) Data frame received for 5\nI0512 13:24:08.878012    2426 log.go:172] (0xc0009ba000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0512 13:24:08.878097    2426 log.go:172] (0xc00076e000) Data frame received for 3\nI0512 13:24:08.878114    2426 log.go:172] (0xc000867d60) (3) Data frame handling\nI0512 13:24:08.879485    2426 log.go:172] (0xc00076e000) Data frame received for 1\nI0512 13:24:08.879497    2426 log.go:172] (0xc0002ac0a0) (1) Data frame handling\nI0512 13:24:08.879503    2426 log.go:172] 
(0xc0002ac0a0) (1) Data frame sent\nI0512 13:24:08.879510    2426 log.go:172] (0xc00076e000) (0xc0002ac0a0) Stream removed, broadcasting: 1\nI0512 13:24:08.879561    2426 log.go:172] (0xc00076e000) Go away received\nI0512 13:24:08.879734    2426 log.go:172] (0xc00076e000) (0xc0002ac0a0) Stream removed, broadcasting: 1\nI0512 13:24:08.879750    2426 log.go:172] (0xc00076e000) (0xc000867d60) Stream removed, broadcasting: 3\nI0512 13:24:08.879758    2426 log.go:172] (0xc00076e000) (0xc0009ba000) Stream removed, broadcasting: 5\n"
May 12 13:24:08.883: INFO: stdout: ""
May 12 13:24:08.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-525 execpodlt6qz -- /bin/sh -x -c nc -zv -t -w 2 10.104.230.95 80'
May 12 13:24:09.088: INFO: stderr: "I0512 13:24:09.023627    2446 log.go:172] (0xc000b62d10) (0xc000aae3c0) Create stream\nI0512 13:24:09.023713    2446 log.go:172] (0xc000b62d10) (0xc000aae3c0) Stream added, broadcasting: 1\nI0512 13:24:09.026608    2446 log.go:172] (0xc000b62d10) Reply frame received for 1\nI0512 13:24:09.026644    2446 log.go:172] (0xc000b62d10) (0xc000996000) Create stream\nI0512 13:24:09.026665    2446 log.go:172] (0xc000b62d10) (0xc000996000) Stream added, broadcasting: 3\nI0512 13:24:09.027330    2446 log.go:172] (0xc000b62d10) Reply frame received for 3\nI0512 13:24:09.027373    2446 log.go:172] (0xc000b62d10) (0xc000996140) Create stream\nI0512 13:24:09.027393    2446 log.go:172] (0xc000b62d10) (0xc000996140) Stream added, broadcasting: 5\nI0512 13:24:09.028077    2446 log.go:172] (0xc000b62d10) Reply frame received for 5\nI0512 13:24:09.080152    2446 log.go:172] (0xc000b62d10) Data frame received for 3\nI0512 13:24:09.080232    2446 log.go:172] (0xc000996000) (3) Data frame handling\nI0512 13:24:09.080259    2446 log.go:172] (0xc000b62d10) Data frame received for 5\nI0512 13:24:09.080278    2446 log.go:172] (0xc000996140) (5) Data frame handling\nI0512 13:24:09.080297    2446 log.go:172] (0xc000996140) (5) Data frame sent\nI0512 13:24:09.080333    2446 log.go:172] (0xc000b62d10) Data frame received for 5\nI0512 13:24:09.080360    2446 log.go:172] (0xc000996140) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.230.95 80\nConnection to 10.104.230.95 80 port [tcp/http] succeeded!\nI0512 13:24:09.082865    2446 log.go:172] (0xc000b62d10) Data frame received for 1\nI0512 13:24:09.082905    2446 log.go:172] (0xc000aae3c0) (1) Data frame handling\nI0512 13:24:09.082924    2446 log.go:172] (0xc000aae3c0) (1) Data frame sent\nI0512 13:24:09.082946    2446 log.go:172] (0xc000b62d10) (0xc000aae3c0) Stream removed, broadcasting: 1\nI0512 13:24:09.082974    2446 log.go:172] (0xc000b62d10) Go away received\nI0512 13:24:09.083367    2446 log.go:172] (0xc000b62d10) (0xc000aae3c0) Stream removed, broadcasting: 1\nI0512 13:24:09.083390    2446 log.go:172] (0xc000b62d10) (0xc000996000) Stream removed, broadcasting: 3\nI0512 13:24:09.083404    2446 log.go:172] (0xc000b62d10) (0xc000996140) Stream removed, broadcasting: 5\n"
May 12 13:24:09.088: INFO: stdout: ""
May 12 13:24:09.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-525 execpodlt6qz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 31087'
May 12 13:24:09.278: INFO: stderr: "I0512 13:24:09.194057    2465 log.go:172] (0xc0008e8000) (0xc0008c20a0) Create stream\nI0512 13:24:09.194100    2465 log.go:172] (0xc0008e8000) (0xc0008c20a0) Stream added, broadcasting: 1\nI0512 13:24:09.196508    2465 log.go:172] (0xc0008e8000) Reply frame received for 1\nI0512 13:24:09.196553    2465 log.go:172] (0xc0008e8000) (0xc0006ad2c0) Create stream\nI0512 13:24:09.196567    2465 log.go:172] (0xc0008e8000) (0xc0006ad2c0) Stream added, broadcasting: 3\nI0512 13:24:09.197514    2465 log.go:172] (0xc0008e8000) Reply frame received for 3\nI0512 13:24:09.197543    2465 log.go:172] (0xc0008e8000) (0xc0008c2140) Create stream\nI0512 13:24:09.197552    2465 log.go:172] (0xc0008e8000) (0xc0008c2140) Stream added, broadcasting: 5\nI0512 13:24:09.198229    2465 log.go:172] (0xc0008e8000) Reply frame received for 5\nI0512 13:24:09.264706    2465 log.go:172] (0xc0008e8000) Data frame received for 5\nI0512 13:24:09.264754    2465 log.go:172] (0xc0008c2140) (5) Data frame handling\nI0512 13:24:09.264777    2465 log.go:172] (0xc0008c2140) (5) Data frame sent\nI0512 13:24:09.264792    2465 log.go:172] (0xc0008e8000) Data frame received for 5\nI0512 13:24:09.264802    2465 log.go:172] (0xc0008c2140) (5) Data frame handling\nI0512 13:24:09.264816    2465 log.go:172] (0xc0008e8000) Data frame received for 3\nI0512 13:24:09.264842    2465 log.go:172] (0xc0006ad2c0) (3) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 31087\nConnection to 172.17.0.15 31087 port [tcp/31087] succeeded!\nI0512 13:24:09.275429    2465 log.go:172] (0xc0008e8000) Data frame received for 1\nI0512 13:24:09.275455    2465 log.go:172] (0xc0008c20a0) (1) Data frame handling\nI0512 13:24:09.275472    2465 log.go:172] (0xc0008c20a0) (1) Data frame sent\nI0512 13:24:09.275485    2465 log.go:172] (0xc0008e8000) (0xc0008c20a0) Stream removed, broadcasting: 1\nI0512 13:24:09.275506    2465 log.go:172] (0xc0008e8000) Go away received\nI0512 13:24:09.275682    2465 log.go:172] (0xc0008e8000) (0xc0008c20a0) Stream removed, broadcasting: 1\nI0512 13:24:09.275693    2465 log.go:172] (0xc0008e8000) (0xc0006ad2c0) Stream removed, broadcasting: 3\nI0512 13:24:09.275698    2465 log.go:172] (0xc0008e8000) (0xc0008c2140) Stream removed, broadcasting: 5\n"
May 12 13:24:09.278: INFO: stdout: ""
May 12 13:24:09.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-525 execpodlt6qz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31087'
May 12 13:24:09.533: INFO: stderr: "I0512 13:24:09.455224    2485 log.go:172] (0xc000931340) (0xc000afc460) Create stream\nI0512 13:24:09.455278    2485 log.go:172] (0xc000931340) (0xc000afc460) Stream added, broadcasting: 1\nI0512 13:24:09.458889    2485 log.go:172] (0xc000931340) Reply frame received for 1\nI0512 13:24:09.459049    2485 log.go:172] (0xc000931340) (0xc000afc500) Create stream\nI0512 13:24:09.459156    2485 log.go:172] (0xc000931340) (0xc000afc500) Stream added, broadcasting: 3\nI0512 13:24:09.460212    2485 log.go:172] (0xc000931340) Reply frame received for 3\nI0512 13:24:09.460249    2485 log.go:172] (0xc000931340) (0xc000afc5a0) Create stream\nI0512 13:24:09.460263    2485 log.go:172] (0xc000931340) (0xc000afc5a0) Stream added, broadcasting: 5\nI0512 13:24:09.461410    2485 log.go:172] (0xc000931340) Reply frame received for 5\nI0512 13:24:09.524933    2485 log.go:172] (0xc000931340) Data frame received for 5\nI0512 13:24:09.524967    2485 log.go:172] (0xc000afc5a0) (5) Data frame handling\nI0512 13:24:09.524979    2485 log.go:172] (0xc000afc5a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.18 31087\nConnection to 172.17.0.18 31087 port [tcp/31087] succeeded!\nI0512 13:24:09.525018    2485 log.go:172] (0xc000931340) Data frame received for 5\nI0512 13:24:09.525044    2485 log.go:172] (0xc000afc5a0) (5) Data frame handling\nI0512 13:24:09.525295    2485 log.go:172] (0xc000931340) Data frame received for 3\nI0512 13:24:09.525334    2485 log.go:172] (0xc000afc500) (3) Data frame handling\nI0512 13:24:09.528830    2485 log.go:172] (0xc000931340) Data frame received for 1\nI0512 13:24:09.528844    2485 log.go:172] (0xc000afc460) (1) Data frame handling\nI0512 13:24:09.528851    2485 log.go:172] (0xc000afc460) (1) Data frame sent\nI0512 13:24:09.528864    2485 log.go:172] (0xc000931340) (0xc000afc460) Stream removed, broadcasting: 1\nI0512 13:24:09.529518    2485 log.go:172] (0xc000931340) Go away received\nI0512 13:24:09.529850    2485 log.go:172] (0xc000931340) (0xc000afc460) Stream removed, broadcasting: 1\nI0512 13:24:09.529864    2485 log.go:172] (0xc000931340) (0xc000afc500) Stream removed, broadcasting: 3\nI0512 13:24:09.529871    2485 log.go:172] (0xc000931340) (0xc000afc5a0) Stream removed, broadcasting: 5\n"
May 12 13:24:09.533: INFO: stdout: ""
May 12 13:24:09.534: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:24:09.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-525" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:27.675 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":189,"skipped":3131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:24:09.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-e5132c3c-d745-438c-bd22-9da77a4a4d30
STEP: Creating a pod to test consume configMaps
May 12 13:24:10.260: INFO: Waiting up to 5m0s for pod "pod-configmaps-48703d2a-faa4-4213-820b-d7d540e43a15" in namespace "configmap-9775" to be "Succeeded or Failed"
May 12 13:24:10.265: INFO: Pod "pod-configmaps-48703d2a-faa4-4213-820b-d7d540e43a15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143296ms
May 12 13:24:12.267: INFO: Pod "pod-configmaps-48703d2a-faa4-4213-820b-d7d540e43a15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00687119s
May 12 13:24:14.295: INFO: Pod "pod-configmaps-48703d2a-faa4-4213-820b-d7d540e43a15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035083164s
STEP: Saw pod success
May 12 13:24:14.296: INFO: Pod "pod-configmaps-48703d2a-faa4-4213-820b-d7d540e43a15" satisfied condition "Succeeded or Failed"
May 12 13:24:14.299: INFO: Trying to get logs from node kali-worker pod pod-configmaps-48703d2a-faa4-4213-820b-d7d540e43a15 container configmap-volume-test: 
STEP: delete the pod
May 12 13:24:14.402: INFO: Waiting for pod pod-configmaps-48703d2a-faa4-4213-820b-d7d540e43a15 to disappear
May 12 13:24:14.415: INFO: Pod pod-configmaps-48703d2a-faa4-4213-820b-d7d540e43a15 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:24:14.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9775" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3153,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:24:14.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May 12 13:24:14.536: INFO: >>> kubeConfig: /root/.kube/config
May 12 13:24:17.517: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:24:29.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4741" for this suite.

• [SLOW TEST:15.021 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":191,"skipped":3154,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:24:29.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 13:24:32.354: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 13:24:34.406: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886672, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886673, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886672, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:24:36.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886672, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886673, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886672, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 13:24:39.458: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:24:39.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-216-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:24:41.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4705" for this suite.
STEP: Destroying namespace "webhook-4705-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.972 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":192,"skipped":3162,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:24:41.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 12 13:24:43.058: INFO: Pod name wrapped-volume-race-732005a3-385e-4c43-ae5f-9426aa114270: Found 0 pods out of 5
May 12 13:24:48.085: INFO: Pod name wrapped-volume-race-732005a3-385e-4c43-ae5f-9426aa114270: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-732005a3-385e-4c43-ae5f-9426aa114270 in namespace emptydir-wrapper-3749, will wait for the garbage collector to delete the pods
May 12 13:25:08.748: INFO: Deleting ReplicationController wrapped-volume-race-732005a3-385e-4c43-ae5f-9426aa114270 took: 309.358147ms
May 12 13:25:09.148: INFO: Terminating ReplicationController wrapped-volume-race-732005a3-385e-4c43-ae5f-9426aa114270 pods took: 400.175876ms
STEP: Creating RC which spawns configmap-volume pods
May 12 13:25:24.018: INFO: Pod name wrapped-volume-race-0aa3abfe-c8c2-4ce8-9ea2-83476c727eb0: Found 0 pods out of 5
May 12 13:25:29.026: INFO: Pod name wrapped-volume-race-0aa3abfe-c8c2-4ce8-9ea2-83476c727eb0: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0aa3abfe-c8c2-4ce8-9ea2-83476c727eb0 in namespace emptydir-wrapper-3749, will wait for the garbage collector to delete the pods
May 12 13:25:44.213: INFO: Deleting ReplicationController wrapped-volume-race-0aa3abfe-c8c2-4ce8-9ea2-83476c727eb0 took: 33.641047ms
May 12 13:25:44.814: INFO: Terminating ReplicationController wrapped-volume-race-0aa3abfe-c8c2-4ce8-9ea2-83476c727eb0 pods took: 600.2148ms
STEP: Creating RC which spawns configmap-volume pods
May 12 13:25:54.921: INFO: Pod name wrapped-volume-race-6d1b2c01-8af8-4fc0-9bef-0025c4f8315f: Found 0 pods out of 5
May 12 13:25:59.928: INFO: Pod name wrapped-volume-race-6d1b2c01-8af8-4fc0-9bef-0025c4f8315f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6d1b2c01-8af8-4fc0-9bef-0025c4f8315f in namespace emptydir-wrapper-3749, will wait for the garbage collector to delete the pods
May 12 13:26:18.008: INFO: Deleting ReplicationController wrapped-volume-race-6d1b2c01-8af8-4fc0-9bef-0025c4f8315f took: 7.903405ms
May 12 13:26:18.508: INFO: Terminating ReplicationController wrapped-volume-race-6d1b2c01-8af8-4fc0-9bef-0025c4f8315f pods took: 500.203431ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:26:38.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3749" for this suite.

• [SLOW TEST:117.582 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":193,"skipped":3166,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:26:38.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 12 13:26:44.254: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:26:44.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5340" for this suite.

• [SLOW TEST:5.410 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":194,"skipped":3171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:26:44.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-e777807e-80b3-4197-bf6d-45b48394b243
STEP: Creating a pod to test consume configMaps
May 12 13:26:45.191: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf" in namespace "configmap-2605" to be "Succeeded or Failed"
May 12 13:26:45.242: INFO: Pod "pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf": Phase="Pending", Reason="", readiness=false. Elapsed: 51.216371ms
May 12 13:26:47.370: INFO: Pod "pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17900935s
May 12 13:26:49.413: INFO: Pod "pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22278354s
May 12 13:26:51.563: INFO: Pod "pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.372055768s
May 12 13:26:54.056: INFO: Pod "pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.865862995s
May 12 13:26:56.658: INFO: Pod "pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.467131158s
STEP: Saw pod success
May 12 13:26:56.658: INFO: Pod "pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf" satisfied condition "Succeeded or Failed"
May 12 13:26:56.731: INFO: Trying to get logs from node kali-worker pod pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf container configmap-volume-test: 
STEP: delete the pod
May 12 13:26:57.395: INFO: Waiting for pod pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf to disappear
May 12 13:26:57.450: INFO: Pod pod-configmaps-c8f2474c-418d-4be1-907b-92e17f6aeaaf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:26:57.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2605" for this suite.

• [SLOW TEST:13.161 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3208,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:26:57.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-1c4ed98c-4f32-4452-a307-d949a9941a73
STEP: Creating a pod to test consume secrets
May 12 13:26:57.882: INFO: Waiting up to 5m0s for pod "pod-secrets-aba3db41-33e3-4715-a1fc-4e28fe487413" in namespace "secrets-8508" to be "Succeeded or Failed"
May 12 13:26:57.923: INFO: Pod "pod-secrets-aba3db41-33e3-4715-a1fc-4e28fe487413": Phase="Pending", Reason="", readiness=false. Elapsed: 41.058347ms
May 12 13:27:00.092: INFO: Pod "pod-secrets-aba3db41-33e3-4715-a1fc-4e28fe487413": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209927246s
May 12 13:27:02.136: INFO: Pod "pod-secrets-aba3db41-33e3-4715-a1fc-4e28fe487413": Phase="Running", Reason="", readiness=true. Elapsed: 4.253849102s
May 12 13:27:04.140: INFO: Pod "pod-secrets-aba3db41-33e3-4715-a1fc-4e28fe487413": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.257848182s
STEP: Saw pod success
May 12 13:27:04.140: INFO: Pod "pod-secrets-aba3db41-33e3-4715-a1fc-4e28fe487413" satisfied condition "Succeeded or Failed"
May 12 13:27:04.143: INFO: Trying to get logs from node kali-worker pod pod-secrets-aba3db41-33e3-4715-a1fc-4e28fe487413 container secret-volume-test: 
STEP: delete the pod
May 12 13:27:04.349: INFO: Waiting for pod pod-secrets-aba3db41-33e3-4715-a1fc-4e28fe487413 to disappear
May 12 13:27:04.360: INFO: Pod pod-secrets-aba3db41-33e3-4715-a1fc-4e28fe487413 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:27:04.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8508" for this suite.

• [SLOW TEST:6.797 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3223,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:27:04.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-4716/configmap-test-b4c37eeb-5f0a-45dd-93ee-caef1ca87388
STEP: Creating a pod to test consume configMaps
May 12 13:27:04.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-0647e92a-caa8-4caf-84de-afd69ea6107f" in namespace "configmap-4716" to be "Succeeded or Failed"
May 12 13:27:04.631: INFO: Pod "pod-configmaps-0647e92a-caa8-4caf-84de-afd69ea6107f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.494976ms
May 12 13:27:06.634: INFO: Pod "pod-configmaps-0647e92a-caa8-4caf-84de-afd69ea6107f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016670104s
May 12 13:27:08.646: INFO: Pod "pod-configmaps-0647e92a-caa8-4caf-84de-afd69ea6107f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028591935s
STEP: Saw pod success
May 12 13:27:08.646: INFO: Pod "pod-configmaps-0647e92a-caa8-4caf-84de-afd69ea6107f" satisfied condition "Succeeded or Failed"
May 12 13:27:08.649: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-0647e92a-caa8-4caf-84de-afd69ea6107f container env-test: 
STEP: delete the pod
May 12 13:27:08.704: INFO: Waiting for pod pod-configmaps-0647e92a-caa8-4caf-84de-afd69ea6107f to disappear
May 12 13:27:08.733: INFO: Pod pod-configmaps-0647e92a-caa8-4caf-84de-afd69ea6107f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:27:08.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4716" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:27:08.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-92453efc-a983-4228-8203-d67e25ea734f
STEP: Creating a pod to test consume configMaps
May 12 13:27:08.928: INFO: Waiting up to 5m0s for pod "pod-configmaps-131c5698-02c7-40d5-98b4-4042fbe3b56d" in namespace "configmap-9612" to be "Succeeded or Failed"
May 12 13:27:08.936: INFO: Pod "pod-configmaps-131c5698-02c7-40d5-98b4-4042fbe3b56d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336715ms
May 12 13:27:10.941: INFO: Pod "pod-configmaps-131c5698-02c7-40d5-98b4-4042fbe3b56d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012812878s
May 12 13:27:12.981: INFO: Pod "pod-configmaps-131c5698-02c7-40d5-98b4-4042fbe3b56d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05281098s
May 12 13:27:14.984: INFO: Pod "pod-configmaps-131c5698-02c7-40d5-98b4-4042fbe3b56d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056296938s
STEP: Saw pod success
May 12 13:27:14.984: INFO: Pod "pod-configmaps-131c5698-02c7-40d5-98b4-4042fbe3b56d" satisfied condition "Succeeded or Failed"
May 12 13:27:14.987: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-131c5698-02c7-40d5-98b4-4042fbe3b56d container configmap-volume-test: 
STEP: delete the pod
May 12 13:27:15.311: INFO: Waiting for pod pod-configmaps-131c5698-02c7-40d5-98b4-4042fbe3b56d to disappear
May 12 13:27:15.344: INFO: Pod pod-configmaps-131c5698-02c7-40d5-98b4-4042fbe3b56d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:27:15.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9612" for this suite.

• [SLOW TEST:6.570 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3273,"failed":0}
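The "consumable from pods in volume as non-root" case above mounts a ConfigMap as a volume while the container runs with a non-root UID. A sketch of that shape, with illustrative names and a hypothetical UID (the test's own pod spec is not reproduced in this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-nonroot   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000                # any non-zero UID satisfies runAsNonRoot
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config       # each ConfigMap key appears as a file here
  volumes:
  - name: config
    configMap:
      name: configmap-test-volume  # hypothetical; the test uses a UUID-suffixed name
```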
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:27:15.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 13:27:16.423: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 13:27:18.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886836, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886836, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886836, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886836, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:27:20.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886836, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886836, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886836, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886836, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 13:27:23.639: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 12 13:27:27.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-3428 to-be-attached-pod -i -c=container1'
May 12 13:27:28.267: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:27:28.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3428" for this suite.
STEP: Destroying namespace "webhook-3428-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.058 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":199,"skipped":3282,"failed":0}
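The "deny attaching pod" case registers a validating webhook against the pods/attach subresource, which is why the `kubectl attach` above exits with rc 1. The registration the test performs through the AdmissionRegistration API corresponds roughly to a manifest like this; the configuration name and the service path are illustrative, while the service name and namespace are taken from the run above:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.example.com   # hypothetical name
webhooks:
- name: deny-attaching-pod.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]              # kubectl attach issues a CONNECT request
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-3428            # namespace created by this run
      name: e2e-test-webhook             # service paired with the endpoint above
      path: /pods/attach                 # hypothetical handler path
    caBundle: <base64 CA cert>           # elided
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
```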
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:27:28.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:27:33.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-700" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":200,"skipped":3287,"failed":0}
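The Watchers spec above verifies an ordering property: every concurrent watch must observe the same events in the same resourceVersion order. A toy sketch of that property, independent of Kubernetes (the broadcaster and event tuples here are illustrative, not the e2e framework's types):

```python
import threading
import queue

def broadcast(events, n_watchers):
    """Fan one ordered event stream out to n concurrent watchers and
    collect what each of them observed."""
    queues = [queue.Queue() for _ in range(n_watchers)]
    results = [[] for _ in range(n_watchers)]

    def watcher(q, out):
        while True:
            ev = q.get()
            if ev is None:      # sentinel: stream closed
                return
            out.append(ev)

    threads = [threading.Thread(target=watcher, args=(q, r))
               for q, r in zip(queues, results)]
    for t in threads:
        t.start()
    for ev in events:           # producer: a single ordered stream
        for q in queues:
            q.put(ev)
    for q in queues:
        q.put(None)
    for t in threads:
        t.join()
    return results

# Events tagged with increasing resource versions, as the test's
# background goroutine would produce.
events = [("ADDED", rv) for rv in range(1, 6)]
observed = broadcast(events, n_watchers=3)
assert all(obs == events for obs in observed)  # identical order everywhere
```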
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:27:33.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8542
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-8542
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8542
May 12 13:27:33.301: INFO: Found 0 stateful pods, waiting for 1
May 12 13:27:43.306: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 12 13:27:43.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 12 13:27:43.540: INFO: stderr: "I0512 13:27:43.439708    2526 log.go:172] (0xc000b194a0) (0xc000b006e0) Create stream\nI0512 13:27:43.439753    2526 log.go:172] (0xc000b194a0) (0xc000b006e0) Stream added, broadcasting: 1\nI0512 13:27:43.442031    2526 log.go:172] (0xc000b194a0) Reply frame received for 1\nI0512 13:27:43.442096    2526 log.go:172] (0xc000b194a0) (0xc000b00780) Create stream\nI0512 13:27:43.442114    2526 log.go:172] (0xc000b194a0) (0xc000b00780) Stream added, broadcasting: 3\nI0512 13:27:43.443262    2526 log.go:172] (0xc000b194a0) Reply frame received for 3\nI0512 13:27:43.443310    2526 log.go:172] (0xc000b194a0) (0xc000b00820) Create stream\nI0512 13:27:43.443328    2526 log.go:172] (0xc000b194a0) (0xc000b00820) Stream added, broadcasting: 5\nI0512 13:27:43.444455    2526 log.go:172] (0xc000b194a0) Reply frame received for 5\nI0512 13:27:43.505568    2526 log.go:172] (0xc000b194a0) Data frame received for 5\nI0512 13:27:43.505599    2526 log.go:172] (0xc000b00820) (5) Data frame handling\nI0512 13:27:43.505617    2526 log.go:172] (0xc000b00820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 13:27:43.531384    2526 log.go:172] (0xc000b194a0) Data frame received for 3\nI0512 13:27:43.531422    2526 log.go:172] (0xc000b00780) (3) Data frame handling\nI0512 13:27:43.531471    2526 log.go:172] (0xc000b00780) (3) Data frame sent\nI0512 13:27:43.531659    2526 log.go:172] (0xc000b194a0) Data frame received for 3\nI0512 13:27:43.531684    2526 log.go:172] (0xc000b00780) (3) Data frame handling\nI0512 13:27:43.531725    2526 log.go:172] (0xc000b194a0) Data frame received for 5\nI0512 13:27:43.531756    2526 log.go:172] (0xc000b00820) (5) Data frame handling\nI0512 13:27:43.534213    2526 log.go:172] (0xc000b194a0) Data frame received for 1\nI0512 13:27:43.534248    2526 log.go:172] (0xc000b006e0) (1) Data frame handling\nI0512 13:27:43.534270    2526 log.go:172] (0xc000b006e0) (1) Data frame sent\nI0512 13:27:43.534303  
  2526 log.go:172] (0xc000b194a0) (0xc000b006e0) Stream removed, broadcasting: 1\nI0512 13:27:43.534327    2526 log.go:172] (0xc000b194a0) Go away received\nI0512 13:27:43.534844    2526 log.go:172] (0xc000b194a0) (0xc000b006e0) Stream removed, broadcasting: 1\nI0512 13:27:43.534867    2526 log.go:172] (0xc000b194a0) (0xc000b00780) Stream removed, broadcasting: 3\nI0512 13:27:43.534880    2526 log.go:172] (0xc000b194a0) (0xc000b00820) Stream removed, broadcasting: 5\n"
May 12 13:27:43.540: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 12 13:27:43.540: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

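The `mv` above deliberately knocks ss-0 out of Ready: the webserver pod's readiness probe fetches the file that was just moved to /tmp, and moving it back later restores readiness. Assuming an HTTP probe against index.html (the probe details below are an illustration, not copied from the test's pod spec), the mechanism is:

```yaml
readinessProbe:
  httpGet:
    path: /index.html   # the file the test moves to /tmp and back
    port: 80
  periodSeconds: 1
```

This lets the test hold pods in Running but not Ready while checking that burst-mode scaling proceeds anyway.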
May 12 13:27:43.545: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 12 13:27:53.548: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 12 13:27:53.548: INFO: Waiting for statefulset status.replicas updated to 0
May 12 13:27:53.632: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 12 13:27:53.632: INFO: ss-0  kali-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:27:53.632: INFO: 
May 12 13:27:53.632: INFO: StatefulSet ss has not reached scale 3, at 1
May 12 13:27:54.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.923400558s
May 12 13:27:55.682: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.909451151s
May 12 13:27:56.886: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.87293911s
May 12 13:27:57.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.669268865s
May 12 13:27:59.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.65612154s
May 12 13:28:00.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.40477803s
May 12 13:28:01.159: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.400877788s
May 12 13:28:02.164: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.396348481s
May 12 13:28:03.215: INFO: Verifying statefulset ss doesn't scale past 3 for another 391.571881ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8542
May 12 13:28:04.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:28:05.999: INFO: stderr: "I0512 13:28:05.943088    2546 log.go:172] (0xc000b56000) (0xc0005b6000) Create stream\nI0512 13:28:05.943117    2546 log.go:172] (0xc000b56000) (0xc0005b6000) Stream added, broadcasting: 1\nI0512 13:28:05.944739    2546 log.go:172] (0xc000b56000) Reply frame received for 1\nI0512 13:28:05.944761    2546 log.go:172] (0xc000b56000) (0xc00067a000) Create stream\nI0512 13:28:05.944768    2546 log.go:172] (0xc000b56000) (0xc00067a000) Stream added, broadcasting: 3\nI0512 13:28:05.945550    2546 log.go:172] (0xc000b56000) Reply frame received for 3\nI0512 13:28:05.945565    2546 log.go:172] (0xc000b56000) (0xc00067a0a0) Create stream\nI0512 13:28:05.945571    2546 log.go:172] (0xc000b56000) (0xc00067a0a0) Stream added, broadcasting: 5\nI0512 13:28:05.946096    2546 log.go:172] (0xc000b56000) Reply frame received for 5\nI0512 13:28:05.992607    2546 log.go:172] (0xc000b56000) Data frame received for 5\nI0512 13:28:05.992640    2546 log.go:172] (0xc00067a0a0) (5) Data frame handling\nI0512 13:28:05.992657    2546 log.go:172] (0xc00067a0a0) (5) Data frame sent\nI0512 13:28:05.992665    2546 log.go:172] (0xc000b56000) Data frame received for 5\nI0512 13:28:05.992671    2546 log.go:172] (0xc00067a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 13:28:05.992797    2546 log.go:172] (0xc000b56000) Data frame received for 3\nI0512 13:28:05.992817    2546 log.go:172] (0xc00067a000) (3) Data frame handling\nI0512 13:28:05.992838    2546 log.go:172] (0xc00067a000) (3) Data frame sent\nI0512 13:28:05.992888    2546 log.go:172] (0xc000b56000) Data frame received for 3\nI0512 13:28:05.992913    2546 log.go:172] (0xc00067a000) (3) Data frame handling\nI0512 13:28:05.994619    2546 log.go:172] (0xc000b56000) Data frame received for 1\nI0512 13:28:05.994632    2546 log.go:172] (0xc0005b6000) (1) Data frame handling\nI0512 13:28:05.994644    2546 log.go:172] (0xc0005b6000) (1) Data frame sent\nI0512 13:28:05.994752  
  2546 log.go:172] (0xc000b56000) (0xc0005b6000) Stream removed, broadcasting: 1\nI0512 13:28:05.994919    2546 log.go:172] (0xc000b56000) Go away received\nI0512 13:28:05.995109    2546 log.go:172] (0xc000b56000) (0xc0005b6000) Stream removed, broadcasting: 1\nI0512 13:28:05.995123    2546 log.go:172] (0xc000b56000) (0xc00067a000) Stream removed, broadcasting: 3\nI0512 13:28:05.995131    2546 log.go:172] (0xc000b56000) (0xc00067a0a0) Stream removed, broadcasting: 5\n"
May 12 13:28:06.000: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 12 13:28:06.000: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 12 13:28:06.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:28:07.553: INFO: stderr: "I0512 13:28:07.479317    2579 log.go:172] (0xc0000eb080) (0xc0006e14a0) Create stream\nI0512 13:28:07.479347    2579 log.go:172] (0xc0000eb080) (0xc0006e14a0) Stream added, broadcasting: 1\nI0512 13:28:07.481261    2579 log.go:172] (0xc0000eb080) Reply frame received for 1\nI0512 13:28:07.481290    2579 log.go:172] (0xc0000eb080) (0xc0006e1540) Create stream\nI0512 13:28:07.481302    2579 log.go:172] (0xc0000eb080) (0xc0006e1540) Stream added, broadcasting: 3\nI0512 13:28:07.482408    2579 log.go:172] (0xc0000eb080) Reply frame received for 3\nI0512 13:28:07.482434    2579 log.go:172] (0xc0000eb080) (0xc0006e15e0) Create stream\nI0512 13:28:07.482445    2579 log.go:172] (0xc0000eb080) (0xc0006e15e0) Stream added, broadcasting: 5\nI0512 13:28:07.483181    2579 log.go:172] (0xc0000eb080) Reply frame received for 5\nI0512 13:28:07.549051    2579 log.go:172] (0xc0000eb080) Data frame received for 3\nI0512 13:28:07.549081    2579 log.go:172] (0xc0006e1540) (3) Data frame handling\nI0512 13:28:07.549091    2579 log.go:172] (0xc0006e1540) (3) Data frame sent\nI0512 13:28:07.549097    2579 log.go:172] (0xc0000eb080) Data frame received for 3\nI0512 13:28:07.549103    2579 log.go:172] (0xc0006e1540) (3) Data frame handling\nI0512 13:28:07.549244    2579 log.go:172] (0xc0000eb080) Data frame received for 5\nI0512 13:28:07.549254    2579 log.go:172] (0xc0006e15e0) (5) Data frame handling\nI0512 13:28:07.549261    2579 log.go:172] (0xc0006e15e0) (5) Data frame sent\nI0512 13:28:07.549267    2579 log.go:172] (0xc0000eb080) Data frame received for 5\nI0512 13:28:07.549273    2579 log.go:172] (0xc0006e15e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 13:28:07.550174    2579 log.go:172] (0xc0000eb080) Data frame received for 1\nI0512 13:28:07.550213    2579 log.go:172] (0xc0006e14a0) (1) Data frame handling\nI0512 13:28:07.550222    2579 
log.go:172] (0xc0006e14a0) (1) Data frame sent\nI0512 13:28:07.550230    2579 log.go:172] (0xc0000eb080) (0xc0006e14a0) Stream removed, broadcasting: 1\nI0512 13:28:07.550238    2579 log.go:172] (0xc0000eb080) Go away received\nI0512 13:28:07.550436    2579 log.go:172] (0xc0000eb080) (0xc0006e14a0) Stream removed, broadcasting: 1\nI0512 13:28:07.550446    2579 log.go:172] (0xc0000eb080) (0xc0006e1540) Stream removed, broadcasting: 3\nI0512 13:28:07.550451    2579 log.go:172] (0xc0000eb080) (0xc0006e15e0) Stream removed, broadcasting: 5\n"
May 12 13:28:07.553: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 12 13:28:07.553: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 12 13:28:07.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:28:07.791: INFO: stderr: "I0512 13:28:07.726110    2600 log.go:172] (0xc000b79340) (0xc0009e46e0) Create stream\nI0512 13:28:07.726135    2600 log.go:172] (0xc000b79340) (0xc0009e46e0) Stream added, broadcasting: 1\nI0512 13:28:07.728922    2600 log.go:172] (0xc000b79340) Reply frame received for 1\nI0512 13:28:07.728947    2600 log.go:172] (0xc000b79340) (0xc0009e4000) Create stream\nI0512 13:28:07.728955    2600 log.go:172] (0xc000b79340) (0xc0009e4000) Stream added, broadcasting: 3\nI0512 13:28:07.729708    2600 log.go:172] (0xc000b79340) Reply frame received for 3\nI0512 13:28:07.729734    2600 log.go:172] (0xc000b79340) (0xc0007e3680) Create stream\nI0512 13:28:07.729742    2600 log.go:172] (0xc000b79340) (0xc0007e3680) Stream added, broadcasting: 5\nI0512 13:28:07.730391    2600 log.go:172] (0xc000b79340) Reply frame received for 5\nI0512 13:28:07.777887    2600 log.go:172] (0xc000b79340) Data frame received for 5\nI0512 13:28:07.777914    2600 log.go:172] (0xc0007e3680) (5) Data frame handling\nI0512 13:28:07.777924    2600 log.go:172] (0xc0007e3680) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 13:28:07.777945    2600 log.go:172] (0xc000b79340) Data frame received for 3\nI0512 13:28:07.777984    2600 log.go:172] (0xc0009e4000) (3) Data frame handling\nI0512 13:28:07.778005    2600 log.go:172] (0xc0009e4000) (3) Data frame sent\nI0512 13:28:07.778021    2600 log.go:172] (0xc000b79340) Data frame received for 3\nI0512 13:28:07.778034    2600 log.go:172] (0xc0009e4000) (3) Data frame handling\nI0512 13:28:07.778070    2600 log.go:172] (0xc000b79340) Data frame received for 5\nI0512 13:28:07.778084    2600 log.go:172] (0xc0007e3680) (5) Data frame handling\nI0512 13:28:07.787913    2600 log.go:172] (0xc000b79340) Data frame received for 1\nI0512 13:28:07.787931    2600 log.go:172] (0xc0009e46e0) (1) Data frame handling\nI0512 13:28:07.787940    2600 
log.go:172] (0xc0009e46e0) (1) Data frame sent\nI0512 13:28:07.787949    2600 log.go:172] (0xc000b79340) (0xc0009e46e0) Stream removed, broadcasting: 1\nI0512 13:28:07.788045    2600 log.go:172] (0xc000b79340) Go away received\nI0512 13:28:07.788167    2600 log.go:172] (0xc000b79340) (0xc0009e46e0) Stream removed, broadcasting: 1\nI0512 13:28:07.788175    2600 log.go:172] (0xc000b79340) (0xc0009e4000) Stream removed, broadcasting: 3\nI0512 13:28:07.788180    2600 log.go:172] (0xc000b79340) (0xc0007e3680) Stream removed, broadcasting: 5\n"
May 12 13:28:07.791: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 12 13:28:07.791: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May 12 13:28:07.794: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 13:28:07.794: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 13:28:07.794: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May 12 13:28:07.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 12 13:28:08.014: INFO: stderr: "I0512 13:28:07.940535    2619 log.go:172] (0xc000a4c000) (0xc0006f0be0) Create stream\nI0512 13:28:07.940581    2619 log.go:172] (0xc000a4c000) (0xc0006f0be0) Stream added, broadcasting: 1\nI0512 13:28:07.942472    2619 log.go:172] (0xc000a4c000) Reply frame received for 1\nI0512 13:28:07.942514    2619 log.go:172] (0xc000a4c000) (0xc0006f0dc0) Create stream\nI0512 13:28:07.942532    2619 log.go:172] (0xc000a4c000) (0xc0006f0dc0) Stream added, broadcasting: 3\nI0512 13:28:07.943350    2619 log.go:172] (0xc000a4c000) Reply frame received for 3\nI0512 13:28:07.943375    2619 log.go:172] (0xc000a4c000) (0xc000894320) Create stream\nI0512 13:28:07.943383    2619 log.go:172] (0xc000a4c000) (0xc000894320) Stream added, broadcasting: 5\nI0512 13:28:07.944093    2619 log.go:172] (0xc000a4c000) Reply frame received for 5\nI0512 13:28:08.009450    2619 log.go:172] (0xc000a4c000) Data frame received for 3\nI0512 13:28:08.009477    2619 log.go:172] (0xc0006f0dc0) (3) Data frame handling\nI0512 13:28:08.009486    2619 log.go:172] (0xc0006f0dc0) (3) Data frame sent\nI0512 13:28:08.009492    2619 log.go:172] (0xc000a4c000) Data frame received for 3\nI0512 13:28:08.009498    2619 log.go:172] (0xc0006f0dc0) (3) Data frame handling\nI0512 13:28:08.009741    2619 log.go:172] (0xc000a4c000) Data frame received for 5\nI0512 13:28:08.009751    2619 log.go:172] (0xc000894320) (5) Data frame handling\nI0512 13:28:08.009758    2619 log.go:172] (0xc000894320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 13:28:08.009875    2619 log.go:172] (0xc000a4c000) Data frame received for 5\nI0512 13:28:08.009911    2619 log.go:172] (0xc000894320) (5) Data frame handling\nI0512 13:28:08.011048    2619 log.go:172] (0xc000a4c000) Data frame received for 1\nI0512 13:28:08.011063    2619 log.go:172] (0xc0006f0be0) (1) Data frame handling\nI0512 13:28:08.011072    2619 log.go:172] (0xc0006f0be0) (1) Data frame sent\nI0512 13:28:08.011262  
  2619 log.go:172] (0xc000a4c000) (0xc0006f0be0) Stream removed, broadcasting: 1\nI0512 13:28:08.011313    2619 log.go:172] (0xc000a4c000) Go away received\nI0512 13:28:08.011475    2619 log.go:172] (0xc000a4c000) (0xc0006f0be0) Stream removed, broadcasting: 1\nI0512 13:28:08.011485    2619 log.go:172] (0xc000a4c000) (0xc0006f0dc0) Stream removed, broadcasting: 3\nI0512 13:28:08.011491    2619 log.go:172] (0xc000a4c000) (0xc000894320) Stream removed, broadcasting: 5\n"
May 12 13:28:08.014: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 12 13:28:08.014: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 12 13:28:08.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 12 13:28:08.293: INFO: stderr: "I0512 13:28:08.196253    2639 log.go:172] (0xc000aaa0b0) (0xc000bd4280) Create stream\nI0512 13:28:08.196283    2639 log.go:172] (0xc000aaa0b0) (0xc000bd4280) Stream added, broadcasting: 1\nI0512 13:28:08.198578    2639 log.go:172] (0xc000aaa0b0) Reply frame received for 1\nI0512 13:28:08.198598    2639 log.go:172] (0xc000aaa0b0) (0xc0002cc1e0) Create stream\nI0512 13:28:08.198605    2639 log.go:172] (0xc000aaa0b0) (0xc0002cc1e0) Stream added, broadcasting: 3\nI0512 13:28:08.199317    2639 log.go:172] (0xc000aaa0b0) Reply frame received for 3\nI0512 13:28:08.199366    2639 log.go:172] (0xc000aaa0b0) (0xc000a100a0) Create stream\nI0512 13:28:08.199382    2639 log.go:172] (0xc000aaa0b0) (0xc000a100a0) Stream added, broadcasting: 5\nI0512 13:28:08.200032    2639 log.go:172] (0xc000aaa0b0) Reply frame received for 5\nI0512 13:28:08.255203    2639 log.go:172] (0xc000aaa0b0) Data frame received for 5\nI0512 13:28:08.255229    2639 log.go:172] (0xc000a100a0) (5) Data frame handling\nI0512 13:28:08.255255    2639 log.go:172] (0xc000a100a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 13:28:08.287261    2639 log.go:172] (0xc000aaa0b0) Data frame received for 5\nI0512 13:28:08.287287    2639 log.go:172] (0xc000a100a0) (5) Data frame handling\nI0512 13:28:08.287304    2639 log.go:172] (0xc000aaa0b0) Data frame received for 3\nI0512 13:28:08.287311    2639 log.go:172] (0xc0002cc1e0) (3) Data frame handling\nI0512 13:28:08.287324    2639 log.go:172] (0xc0002cc1e0) (3) Data frame sent\nI0512 13:28:08.287333    2639 log.go:172] (0xc000aaa0b0) Data frame received for 3\nI0512 13:28:08.287341    2639 log.go:172] (0xc0002cc1e0) (3) Data frame handling\nI0512 13:28:08.288570    2639 log.go:172] (0xc000aaa0b0) Data frame received for 1\nI0512 13:28:08.288581    2639 log.go:172] (0xc000bd4280) (1) Data frame handling\nI0512 13:28:08.288586    2639 log.go:172] (0xc000bd4280) (1) Data frame sent\nI0512 13:28:08.288765  
  2639 log.go:172] (0xc000aaa0b0) (0xc000bd4280) Stream removed, broadcasting: 1\nI0512 13:28:08.288920    2639 log.go:172] (0xc000aaa0b0) Go away received\nI0512 13:28:08.289336    2639 log.go:172] (0xc000aaa0b0) (0xc000bd4280) Stream removed, broadcasting: 1\nI0512 13:28:08.289362    2639 log.go:172] (0xc000aaa0b0) (0xc0002cc1e0) Stream removed, broadcasting: 3\nI0512 13:28:08.289375    2639 log.go:172] (0xc000aaa0b0) (0xc000a100a0) Stream removed, broadcasting: 5\n"
May 12 13:28:08.293: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 12 13:28:08.293: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May 12 13:28:08.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 12 13:28:08.955: INFO: stderr: "I0512 13:28:08.826503    2661 log.go:172] (0xc0005560b0) (0xc000932000) Create stream\nI0512 13:28:08.826539    2661 log.go:172] (0xc0005560b0) (0xc000932000) Stream added, broadcasting: 1\nI0512 13:28:08.827959    2661 log.go:172] (0xc0005560b0) Reply frame received for 1\nI0512 13:28:08.827991    2661 log.go:172] (0xc0005560b0) (0xc000a56000) Create stream\nI0512 13:28:08.828001    2661 log.go:172] (0xc0005560b0) (0xc000a56000) Stream added, broadcasting: 3\nI0512 13:28:08.828905    2661 log.go:172] (0xc0005560b0) Reply frame received for 3\nI0512 13:28:08.828945    2661 log.go:172] (0xc0005560b0) (0xc0009320a0) Create stream\nI0512 13:28:08.828958    2661 log.go:172] (0xc0005560b0) (0xc0009320a0) Stream added, broadcasting: 5\nI0512 13:28:08.830092    2661 log.go:172] (0xc0005560b0) Reply frame received for 5\nI0512 13:28:08.876400    2661 log.go:172] (0xc0005560b0) Data frame received for 5\nI0512 13:28:08.876428    2661 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0512 13:28:08.876445    2661 log.go:172] (0xc0009320a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 13:28:08.948227    2661 log.go:172] (0xc0005560b0) Data frame received for 3\nI0512 13:28:08.948258    2661 log.go:172] (0xc000a56000) (3) Data frame handling\nI0512 13:28:08.948277    2661 log.go:172] (0xc000a56000) (3) Data frame sent\nI0512 13:28:08.948286    2661 log.go:172] (0xc0005560b0) Data frame received for 3\nI0512 13:28:08.948294    2661 log.go:172] (0xc000a56000) (3) Data frame handling\nI0512 13:28:08.948329    2661 log.go:172] (0xc0005560b0) Data frame received for 5\nI0512 13:28:08.948346    2661 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0512 13:28:08.950065    2661 log.go:172] (0xc0005560b0) Data frame received for 1\nI0512 13:28:08.950081    2661 log.go:172] (0xc000932000) (1) Data frame handling\nI0512 13:28:08.950091    2661 log.go:172] (0xc000932000) (1) Data frame sent\nI0512 13:28:08.950104  
  2661 log.go:172] (0xc0005560b0) (0xc000932000) Stream removed, broadcasting: 1\nI0512 13:28:08.950310    2661 log.go:172] (0xc0005560b0) Go away received\nI0512 13:28:08.950340    2661 log.go:172] (0xc0005560b0) (0xc000932000) Stream removed, broadcasting: 1\nI0512 13:28:08.950353    2661 log.go:172] (0xc0005560b0) (0xc000a56000) Stream removed, broadcasting: 3\nI0512 13:28:08.950362    2661 log.go:172] (0xc0005560b0) (0xc0009320a0) Stream removed, broadcasting: 5\n"
May 12 13:28:08.955: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 12 13:28:08.955: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
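The step above deliberately breaks each pod's readiness probe: moving index.html out of the Apache web root makes the HTTP probe fail, so the pods drop to "Running - Ready=false" without being restarted. The same mechanism can be sketched locally without a cluster (the `probe` function is a stand-in for the kubelet's file-backed readiness check; all paths here are temporary and illustrative):

```shell
# Sketch of the readiness-breaking trick from the log, simulated with
# local files. Moving index.html aside makes the stand-in probe fail,
# mirroring the pods' transition to "Running - Ready=false".
htdocs=$(mktemp -d)     # stand-in for /usr/local/apache2/htdocs
stash=$(mktemp -d)      # stand-in for /tmp on the pod
echo ok > "$htdocs/index.html"

# Stand-in readiness probe: succeeds only while index.html is served.
probe() { test -f "$htdocs/index.html"; }

probe; ready_before=$?                    # probe passes: Ready=true
mv -v "$htdocs/index.html" "$stash/"      # same mv the test runs in the pod
probe; ready_after=$?                     # probe fails: Ready=false
```

In the real test the `|| true` suffix on the `mv` keeps the exec from failing the step if the file was already moved on a previous attempt.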

May 12 13:28:08.955: INFO: Waiting for statefulset status.replicas updated to 0
May 12 13:28:08.987: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
May 12 13:28:18.994: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 12 13:28:18.995: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 12 13:28:18.995: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 12 13:28:19.023: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 12 13:28:19.024: INFO: ss-0  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:19.024: INFO: ss-1  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  }]
May 12 13:28:19.024: INFO: ss-2  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  }]
May 12 13:28:19.024: INFO: 
May 12 13:28:19.024: INFO: StatefulSet ss has not reached scale 0, at 3
May 12 13:28:20.027: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 12 13:28:20.027: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:20.027: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  }]
May 12 13:28:20.027: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  }]
May 12 13:28:20.027: INFO: 
May 12 13:28:20.027: INFO: StatefulSet ss has not reached scale 0, at 3
May 12 13:28:21.066: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 12 13:28:21.066: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:21.066: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  }]
May 12 13:28:21.066: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  }]
May 12 13:28:21.066: INFO: 
May 12 13:28:21.066: INFO: StatefulSet ss has not reached scale 0, at 3
May 12 13:28:22.590: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 12 13:28:22.590: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:22.590: INFO: ss-1  kali-worker2  Running  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:53 +0000 UTC  }]
May 12 13:28:22.590: INFO: 
May 12 13:28:22.590: INFO: StatefulSet ss has not reached scale 0, at 2
May 12 13:28:23.595: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 12 13:28:23.595: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:23.595: INFO: 
May 12 13:28:23.595: INFO: StatefulSet ss has not reached scale 0, at 1
May 12 13:28:24.599: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 12 13:28:24.599: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:24.599: INFO: 
May 12 13:28:24.599: INFO: StatefulSet ss has not reached scale 0, at 1
May 12 13:28:25.718: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 12 13:28:25.719: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:25.719: INFO: 
May 12 13:28:25.719: INFO: StatefulSet ss has not reached scale 0, at 1
May 12 13:28:26.722: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 12 13:28:26.722: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:26.722: INFO: 
May 12 13:28:26.722: INFO: StatefulSet ss has not reached scale 0, at 1
May 12 13:28:27.725: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 12 13:28:27.725: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:27.725: INFO: 
May 12 13:28:27.725: INFO: StatefulSet ss has not reached scale 0, at 1
May 12 13:28:28.728: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May 12 13:28:28.728: INFO: ss-0  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:27:33 +0000 UTC  }]
May 12 13:28:28.728: INFO: 
May 12 13:28:28.728: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-8542
May 12 13:28:29.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:28:31.701: INFO: rc: 1
May 12 13:28:31.701: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
May 12 13:28:41.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:28:41.800: INFO: rc: 1
May 12 13:28:41.800: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:28:51.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:28:51.905: INFO: rc: 1
May 12 13:28:51.905: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:29:01.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:29:01.998: INFO: rc: 1
May 12 13:29:01.998: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:29:11.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:29:12.093: INFO: rc: 1
May 12 13:29:12.093: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:29:22.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:29:22.392: INFO: rc: 1
May 12 13:29:22.392: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:29:32.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:29:32.499: INFO: rc: 1
May 12 13:29:32.499: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:29:42.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:29:42.583: INFO: rc: 1
May 12 13:29:42.583: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:29:52.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:29:52.725: INFO: rc: 1
May 12 13:29:52.725: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:30:02.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:30:02.807: INFO: rc: 1
May 12 13:30:02.807: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:30:12.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:30:12.966: INFO: rc: 1
May 12 13:30:12.966: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:30:22.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:30:23.168: INFO: rc: 1
May 12 13:30:23.168: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:30:33.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:30:33.278: INFO: rc: 1
May 12 13:30:33.278: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:30:43.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:30:43.391: INFO: rc: 1
May 12 13:30:43.391: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:30:53.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:30:53.492: INFO: rc: 1
May 12 13:30:53.492: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:31:03.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:31:03.596: INFO: rc: 1
May 12 13:31:03.596: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:31:13.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:31:13.698: INFO: rc: 1
May 12 13:31:13.698: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:31:23.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:31:23.798: INFO: rc: 1
May 12 13:31:23.798: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:31:33.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:31:33.886: INFO: rc: 1
May 12 13:31:33.886: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:31:43.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:31:43.997: INFO: rc: 1
May 12 13:31:43.997: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:31:53.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:31:55.027: INFO: rc: 1
May 12 13:31:55.027: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:32:05.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:32:05.150: INFO: rc: 1
May 12 13:32:05.150: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:32:15.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:32:15.236: INFO: rc: 1
May 12 13:32:15.236: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:32:25.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:32:25.339: INFO: rc: 1
May 12 13:32:25.339: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:32:35.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:32:35.431: INFO: rc: 1
May 12 13:32:35.431: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:32:45.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:32:45.524: INFO: rc: 1
May 12 13:32:45.524: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:32:55.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:32:55.636: INFO: rc: 1
May 12 13:32:55.636: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:33:05.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:33:05.748: INFO: rc: 1
May 12 13:33:05.748: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:33:15.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:33:15.841: INFO: rc: 1
May 12 13:33:15.841: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:33:25.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:33:25.925: INFO: rc: 1
May 12 13:33:25.925: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May 12 13:33:35.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 13:33:36.030: INFO: rc: 1
May 12 13:33:36.030: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
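The run of failures above shows the framework's RunHostCmd retry loop: the restore command keeps failing (first "container not found", then "pods \"ss-0\" not found" once the pod is deleted by the scale-down), and the framework retries every 10s until its timeout expires and it moves on. That retry-until-success-or-give-up pattern can be sketched without a cluster (`run_host_cmd` is a stand-in for the kubectl exec call, and the counters are illustrative):

```shell
# Retry loop mirroring "Waiting 10s to retry failed RunHostCmd" above.
# run_host_cmd stands in for the kubectl exec; here it fails its first
# three attempts, the way the real exec fails while the pod is gone.
attempts=0
run_host_cmd() { [ "$attempts" -ge 3 ]; }

retries=0
until run_host_cmd; do
  retries=$((retries + 1))
  attempts=$((attempts + 1))
  [ "$retries" -ge 10 ] && break   # give up after a fixed budget
  sleep 0                          # the e2e framework sleeps 10s here
done
```

Because the test tolerates this command failing (the `|| true` and the bounded retry budget), the suite still passes even though ss-0 never comes back, which is exactly the point of the burst-scaling-with-unhealthy-pods scenario.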
May 12 13:33:36.030: INFO: Scaling statefulset ss to 0
May 12 13:33:36.037: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 12 13:33:36.039: INFO: Deleting all statefulset in ns statefulset-8542
May 12 13:33:36.040: INFO: Scaling statefulset ss to 0
May 12 13:33:36.047: INFO: Waiting for statefulset status.replicas updated to 0
May 12 13:33:36.049: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:33:36.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8542" for this suite.

• [SLOW TEST:362.956 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":201,"skipped":3326,"failed":0}
SSS
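The long run of `Waiting 10s to retry failed RunHostCmd` lines above comes from the e2e framework retrying a `kubectl exec` that keeps failing with rc 1 because pod `ss-0` no longer exists. A minimal Python sketch of that retry loop (not the framework's actual Go implementation; the `run` callable and parameter names are hypothetical) looks like:

```python
import time
from typing import Callable, Tuple

def run_host_cmd_with_retry(
    run: Callable[[], Tuple[int, str, str]],
    retries: int = 5,
    delay_s: float = 10.0,
) -> Tuple[int, str, str]:
    """Invoke a host command and retry on a nonzero exit code,
    mirroring the 'Waiting 10s to retry failed RunHostCmd' lines.
    `run` returns (rc, stdout, stderr); it is injected so the
    sketch needs no real kubectl or cluster."""
    rc, out, err = run()
    for _ in range(retries):
        if rc == 0:
            break
        time.sleep(delay_s)  # the framework waits 10s between attempts
        rc, out, err = run()
    return rc, out, err
```

In the log every attempt fails, so the framework eventually gives up, logs the empty stdout, and proceeds to scale the StatefulSet down.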
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:33:36.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-5d987cc4-cc12-45fb-aef0-e444b75db944
STEP: Creating secret with name secret-projected-all-test-volume-6248c3d7-e0fc-456f-b13e-2cfdcdb53f8b
STEP: Creating a pod to test Check all projections for projected volume plugin
May 12 13:33:36.739: INFO: Waiting up to 5m0s for pod "projected-volume-da1886ec-afad-43d2-ba7b-1a9d094b6fd7" in namespace "projected-54" to be "Succeeded or Failed"
May 12 13:33:36.776: INFO: Pod "projected-volume-da1886ec-afad-43d2-ba7b-1a9d094b6fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.003123ms
May 12 13:33:38.782: INFO: Pod "projected-volume-da1886ec-afad-43d2-ba7b-1a9d094b6fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042782264s
May 12 13:33:40.877: INFO: Pod "projected-volume-da1886ec-afad-43d2-ba7b-1a9d094b6fd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137115751s
STEP: Saw pod success
May 12 13:33:40.877: INFO: Pod "projected-volume-da1886ec-afad-43d2-ba7b-1a9d094b6fd7" satisfied condition "Succeeded or Failed"
May 12 13:33:40.879: INFO: Trying to get logs from node kali-worker2 pod projected-volume-da1886ec-afad-43d2-ba7b-1a9d094b6fd7 container projected-all-volume-test: 
STEP: delete the pod
May 12 13:33:40.959: INFO: Waiting for pod projected-volume-da1886ec-afad-43d2-ba7b-1a9d094b6fd7 to disappear
May 12 13:33:41.050: INFO: Pod projected-volume-da1886ec-afad-43d2-ba7b-1a9d094b6fd7 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:33:41.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-54" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3329,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:33:41.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3613
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May 12 13:33:42.033: INFO: Found 0 stateful pods, waiting for 3
May 12 13:33:52.081: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 13:33:52.081: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 13:33:52.081: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 12 13:34:02.037: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 13:34:02.037: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 13:34:02.037: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 12 13:34:02.064: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 12 13:34:12.120: INFO: Updating stateful set ss2
May 12 13:34:12.483: INFO: Waiting for Pod statefulset-3613/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 12 13:34:22.498: INFO: Waiting for Pod statefulset-3613/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May 12 13:34:33.452: INFO: Found 2 stateful pods, waiting for 3
May 12 13:34:43.488: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 13:34:43.488: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 13:34:43.488: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 12 13:34:43.509: INFO: Updating stateful set ss2
May 12 13:34:43.561: INFO: Waiting for Pod statefulset-3613/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 12 13:34:53.715: INFO: Updating stateful set ss2
May 12 13:34:53.752: INFO: Waiting for StatefulSet statefulset-3613/ss2 to complete update
May 12 13:34:53.752: INFO: Waiting for Pod statefulset-3613/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 12 13:35:04.046: INFO: Waiting for StatefulSet statefulset-3613/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 12 13:35:13.758: INFO: Deleting all statefulset in ns statefulset-3613
May 12 13:35:13.760: INFO: Scaling statefulset ss2 to 0
May 12 13:35:43.829: INFO: Waiting for statefulset status.replicas updated to 0
May 12 13:35:43.832: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:35:43.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3613" for this suite.

• [SLOW TEST:122.799 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":203,"skipped":3333,"failed":0}
SSSSSS
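The canary steps above rely on the StatefulSet `RollingUpdate` partition: only pods whose ordinal is greater than or equal to the partition are moved to the new revision, so a partition larger than the replica count updates nothing ("Not applying an update when the partition is greater than the number of replicas"). A tiny Python sketch of that selection rule (the helper name is made up for illustration):

```python
def pods_to_update(replicas: int, partition: int) -> list:
    """Return the pod names a partitioned rolling update may move to the
    new revision. With replicas=3 and partition=2, only ss2-2 is updated
    (the canary); lowering the partition later phases in the rest."""
    return [f"ss2-{i}" for i in range(replicas) if i >= partition]
```

This is why the log first waits only on `ss2-2` to reach revision `ss2-65c7964b94`, and the later "phased rolling update" lowers the partition so `ss2-1` and then `ss2-0` follow.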
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:35:43.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 12 13:35:44.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5757'
May 12 13:35:44.259: INFO: stderr: ""
May 12 13:35:44.259: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
May 12 13:35:44.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5757'
May 12 13:35:50.624: INFO: stderr: ""
May 12 13:35:50.624: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:35:50.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5757" for this suite.

• [SLOW TEST:6.842 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":204,"skipped":3339,"failed":0}
SS
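The kubectl invocation logged above is assembled from the server, kubeconfig, and per-test flags. A small Python sketch that rebuilds that argv (a hypothetical helper, shown only to make the command structure explicit; it does not execute anything):

```python
def kubectl_run_args(server: str, kubeconfig: str, name: str,
                     image: str, namespace: str,
                     restart: str = "Never") -> list:
    """Assemble the argv behind the logged
    'kubectl ... run e2e-test-httpd-pod --restart=Never ...' line."""
    return [
        "/usr/local/bin/kubectl",
        f"--server={server}",
        f"--kubeconfig={kubeconfig}",
        "run", name,
        f"--restart={restart}",
        f"--image={image}",
        f"--namespace={namespace}",
    ]
```

`--restart=Never` is what makes `kubectl run` create a bare Pod rather than a managed workload, which is exactly what the subsequent "verifying the pod e2e-test-httpd-pod was created" step checks.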
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:35:50.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:35:52.993: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May 12 13:35:58.109: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 12 13:36:00.324: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 12 13:36:00.759: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-202 /apis/apps/v1/namespaces/deployment-202/deployments/test-cleanup-deployment b00278aa-12b1-49c8-866b-a1322c99b456 3740784 1 2020-05-12 13:36:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  [{e2e.test Update apps/v1 2020-05-12 13:36:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 
115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00538ba78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

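The unreadable `FieldsV1{Raw:*[123 34 102 58 ...]}` runs in the object dumps above are JSON byte slices printed as decimal values by Go's struct formatter (123 is `{`, 34 is `"`, 102 is `f`, 58 is `:`, so each dump begins `{"f:metadata"...`). They can be decoded back to the managed-fields JSON with a few lines of Python:

```python
import json

def decode_fields_v1(raw_bytes):
    """Decode a FieldsV1 Raw byte slice (printed as decimal values in the
    log) back into the managed-fields JSON object it encodes."""
    return json.loads(bytes(raw_bytes).decode("utf-8"))
```

Feeding it the leading bytes of any dump above confirms the encoding; the full arrays decode to the server-side-apply field ownership records for the Deployment and its ReplicaSets.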
May 12 13:36:01.516: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f  deployment-202 /apis/apps/v1/namespaces/deployment-202/replicasets/test-cleanup-deployment-b4867b47f f3d47114-f978-4e59-8b92-9b7dcb3a6f9b 3740786 1 2020-05-12 13:36:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment b00278aa-12b1-49c8-866b-a1322c99b456 0xc00311d130 0xc00311d131}] []  [{kube-controller-manager Update apps/v1 2020-05-12 13:36:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 48 50 55 56 97 97 45 49 50 98 49 45 52 57 99 56 45 56 54 54 98 45 97 49 51 50 50 99 57 57 98 52 53 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 
34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 
123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00311d1a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 12 13:36:01.517: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
May 12 13:36:01.517: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-202 /apis/apps/v1/namespaces/deployment-202/replicasets/test-cleanup-controller 7da42142-c858-4248-94ff-ab1f71597a49 3740785 1 2020-05-12 13:35:52 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment b00278aa-12b1-49c8-866b-a1322c99b456 0xc00311d027 0xc00311d028}] []  [{e2e.test Update apps/v1 2020-05-12 13:35:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 
114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-12 13:36:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 48 50 55 56 97 97 45 49 50 98 49 45 52 57 99 56 45 56 54 54 98 45 97 49 51 50 50 99 57 57 98 52 53 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] 
{map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00311d0c8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
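The spec dumped above sets `RevisionHistoryLimit:*0`, which is what the "Waiting for deployment test-cleanup-deployment history to be cleaned up" step exercises: after a rollout, the deployment controller deletes superseded ReplicaSets beyond that limit. A simplified Python sketch of the pruning rule (the helper and dict shape are illustrative, not the controller's actual code):

```python
def replica_sets_to_prune(old_rs, revision_history_limit):
    """Given superseded ReplicaSets (dicts with a 'revision' key), return
    the ones the controller would delete: everything except the newest
    `revision_history_limit`. With a limit of 0, all old ReplicaSets go."""
    ordered = sorted(old_rs, key=lambda rs: rs["revision"])
    cutoff = len(ordered) - revision_history_limit
    return ordered[:max(cutoff, 0)]
```

With the limit of 0 seen here, `test-cleanup-controller` is expected to be deleted once `test-cleanup-deployment-b4867b47f` takes over, which is precisely what the test asserts.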
May 12 13:36:02.301: INFO: Pod "test-cleanup-controller-bp7s7" is available:
&Pod{ObjectMeta:{test-cleanup-controller-bp7s7 test-cleanup-controller- deployment-202 /api/v1/namespaces/deployment-202/pods/test-cleanup-controller-bp7s7 c8df3bf4-41a7-474b-8e2b-fc6700a1bc5d 3740761 0 2020-05-12 13:35:52 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 7da42142-c858-4248-94ff-ab1f71597a49 0xc0052c9e27 0xc0052c9e28}] []  [{kube-controller-manager Update v1 2020-05-12 13:35:52 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7da42142-c858-4248-94ff-ab1f71597a49\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-12 13:35:58 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.158\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qqbsq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qqbsq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qqbsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 13:35:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 13:35:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 13:35:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 13:35:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.158,StartTime:2020-05-12 13:35:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 13:35:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f0b3ab09b5d93f55cbafb7dd5bfb7a1193c8c6c41d464ef854a103b6d1213284,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 12 13:36:02.301: INFO: Pod "test-cleanup-deployment-b4867b47f-nx296" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-nx296 test-cleanup-deployment-b4867b47f- deployment-202 /api/v1/namespaces/deployment-202/pods/test-cleanup-deployment-b4867b47f-nx296 1ac242c4-5b0b-4c14-a02e-b612e2f80335 3740792 0 2020-05-12 13:36:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f f3d47114-f978-4e59-8b92-9b7dcb3a6f9b 0xc0052c9fe0 0xc0052c9fe1}] []  [{kube-controller-manager Update v1 2020-05-12 13:36:00 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3d47114-f978-4e59-8b92-9b7dcb3a6f9b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qqbsq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qqbsq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qqbsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 13:36:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:36:02.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-202" for this suite.

• [SLOW TEST:12.032 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":205,"skipped":3341,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:36:02.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0512 13:36:09.922379       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 13:36:09.922: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:36:09.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4277" for this suite.

• [SLOW TEST:7.612 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":206,"skipped":3351,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:36:10.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:36:11.488: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 12 13:36:11.724: INFO: Number of nodes with available pods: 0
May 12 13:36:11.724: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 12 13:36:12.341: INFO: Number of nodes with available pods: 0
May 12 13:36:12.341: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:13.552: INFO: Number of nodes with available pods: 0
May 12 13:36:13.552: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:14.467: INFO: Number of nodes with available pods: 0
May 12 13:36:14.467: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:15.344: INFO: Number of nodes with available pods: 0
May 12 13:36:15.344: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:16.347: INFO: Number of nodes with available pods: 1
May 12 13:36:16.347: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 12 13:36:16.448: INFO: Number of nodes with available pods: 1
May 12 13:36:16.448: INFO: Number of running nodes: 0, number of available pods: 1
May 12 13:36:17.943: INFO: Number of nodes with available pods: 0
May 12 13:36:17.943: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 12 13:36:18.303: INFO: Number of nodes with available pods: 0
May 12 13:36:18.303: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:19.881: INFO: Number of nodes with available pods: 0
May 12 13:36:19.881: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:20.653: INFO: Number of nodes with available pods: 0
May 12 13:36:20.653: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:21.380: INFO: Number of nodes with available pods: 0
May 12 13:36:21.380: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:22.308: INFO: Number of nodes with available pods: 0
May 12 13:36:22.308: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:23.326: INFO: Number of nodes with available pods: 0
May 12 13:36:23.326: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:24.313: INFO: Number of nodes with available pods: 0
May 12 13:36:24.314: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:25.307: INFO: Number of nodes with available pods: 0
May 12 13:36:25.307: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:26.307: INFO: Number of nodes with available pods: 0
May 12 13:36:26.307: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:27.313: INFO: Number of nodes with available pods: 0
May 12 13:36:27.313: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:28.307: INFO: Number of nodes with available pods: 0
May 12 13:36:28.307: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:29.343: INFO: Number of nodes with available pods: 0
May 12 13:36:29.343: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:30.307: INFO: Number of nodes with available pods: 0
May 12 13:36:30.307: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:31.381: INFO: Number of nodes with available pods: 0
May 12 13:36:31.381: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:32.306: INFO: Number of nodes with available pods: 0
May 12 13:36:32.306: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:33.306: INFO: Number of nodes with available pods: 0
May 12 13:36:33.306: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:34.418: INFO: Number of nodes with available pods: 0
May 12 13:36:34.419: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:35.407: INFO: Number of nodes with available pods: 0
May 12 13:36:35.407: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:36.306: INFO: Number of nodes with available pods: 0
May 12 13:36:36.306: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:37.306: INFO: Number of nodes with available pods: 0
May 12 13:36:37.306: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:38.310: INFO: Number of nodes with available pods: 0
May 12 13:36:38.310: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:39.429: INFO: Number of nodes with available pods: 0
May 12 13:36:39.429: INFO: Node kali-worker is running more than one daemon pod
May 12 13:36:40.388: INFO: Number of nodes with available pods: 1
May 12 13:36:40.388: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4070, will wait for the garbage collector to delete the pods
May 12 13:36:40.653: INFO: Deleting DaemonSet.extensions daemon-set took: 137.339126ms
May 12 13:36:41.053: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.234973ms
May 12 13:36:54.038: INFO: Number of nodes with available pods: 0
May 12 13:36:54.038: INFO: Number of running nodes: 0, number of available pods: 0
May 12 13:36:54.045: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4070/daemonsets","resourceVersion":"3741050"},"items":null}

May 12 13:36:54.050: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4070/pods","resourceVersion":"3741050"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:36:54.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4070" for this suite.

• [SLOW TEST:43.839 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":207,"skipped":3355,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:36:54.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6633
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 12 13:36:54.332: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 12 13:36:54.484: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 13:36:56.886: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 13:36:58.692: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 13:37:00.561: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 13:37:02.852: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 13:37:04.630: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 13:37:06.686: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 13:37:08.589: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 13:37:10.495: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 12 13:37:10.501: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 12 13:37:12.506: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 12 13:37:14.504: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 12 13:37:16.505: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 12 13:37:22.786: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.161:8080/dial?request=hostname&protocol=udp&host=10.244.2.94&port=8081&tries=1'] Namespace:pod-network-test-6633 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:37:22.786: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:37:22.820846       7 log.go:172] (0xc002e58420) (0xc0016d1680) Create stream
I0512 13:37:22.820872       7 log.go:172] (0xc002e58420) (0xc0016d1680) Stream added, broadcasting: 1
I0512 13:37:22.822394       7 log.go:172] (0xc002e58420) Reply frame received for 1
I0512 13:37:22.822426       7 log.go:172] (0xc002e58420) (0xc0017081e0) Create stream
I0512 13:37:22.822438       7 log.go:172] (0xc002e58420) (0xc0017081e0) Stream added, broadcasting: 3
I0512 13:37:22.822966       7 log.go:172] (0xc002e58420) Reply frame received for 3
I0512 13:37:22.822987       7 log.go:172] (0xc002e58420) (0xc0016d17c0) Create stream
I0512 13:37:22.822994       7 log.go:172] (0xc002e58420) (0xc0016d17c0) Stream added, broadcasting: 5
I0512 13:37:22.823514       7 log.go:172] (0xc002e58420) Reply frame received for 5
I0512 13:37:22.911872       7 log.go:172] (0xc002e58420) Data frame received for 3
I0512 13:37:22.911931       7 log.go:172] (0xc0017081e0) (3) Data frame handling
I0512 13:37:22.911967       7 log.go:172] (0xc0017081e0) (3) Data frame sent
I0512 13:37:22.912105       7 log.go:172] (0xc002e58420) Data frame received for 5
I0512 13:37:22.912130       7 log.go:172] (0xc0016d17c0) (5) Data frame handling
I0512 13:37:22.912147       7 log.go:172] (0xc002e58420) Data frame received for 3
I0512 13:37:22.912155       7 log.go:172] (0xc0017081e0) (3) Data frame handling
I0512 13:37:22.913830       7 log.go:172] (0xc002e58420) Data frame received for 1
I0512 13:37:22.913872       7 log.go:172] (0xc0016d1680) (1) Data frame handling
I0512 13:37:22.913906       7 log.go:172] (0xc0016d1680) (1) Data frame sent
I0512 13:37:22.913924       7 log.go:172] (0xc002e58420) (0xc0016d1680) Stream removed, broadcasting: 1
I0512 13:37:22.913942       7 log.go:172] (0xc002e58420) Go away received
I0512 13:37:22.914106       7 log.go:172] (0xc002e58420) (0xc0016d1680) Stream removed, broadcasting: 1
I0512 13:37:22.914142       7 log.go:172] (0xc002e58420) (0xc0017081e0) Stream removed, broadcasting: 3
I0512 13:37:22.914167       7 log.go:172] (0xc002e58420) (0xc0016d17c0) Stream removed, broadcasting: 5
May 12 13:37:22.914: INFO: Waiting for responses: map[]
May 12 13:37:22.917: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.161:8080/dial?request=hostname&protocol=udp&host=10.244.1.160&port=8081&tries=1'] Namespace:pod-network-test-6633 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:37:22.917: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:37:22.944403       7 log.go:172] (0xc002b702c0) (0xc001708820) Create stream
I0512 13:37:22.944431       7 log.go:172] (0xc002b702c0) (0xc001708820) Stream added, broadcasting: 1
I0512 13:37:22.946748       7 log.go:172] (0xc002b702c0) Reply frame received for 1
I0512 13:37:22.946787       7 log.go:172] (0xc002b702c0) (0xc001eb4280) Create stream
I0512 13:37:22.946802       7 log.go:172] (0xc002b702c0) (0xc001eb4280) Stream added, broadcasting: 3
I0512 13:37:22.947727       7 log.go:172] (0xc002b702c0) Reply frame received for 3
I0512 13:37:22.947761       7 log.go:172] (0xc002b702c0) (0xc0016d1c20) Create stream
I0512 13:37:22.947774       7 log.go:172] (0xc002b702c0) (0xc0016d1c20) Stream added, broadcasting: 5
I0512 13:37:22.948715       7 log.go:172] (0xc002b702c0) Reply frame received for 5
I0512 13:37:23.023748       7 log.go:172] (0xc002b702c0) Data frame received for 3
I0512 13:37:23.023779       7 log.go:172] (0xc001eb4280) (3) Data frame handling
I0512 13:37:23.023807       7 log.go:172] (0xc001eb4280) (3) Data frame sent
I0512 13:37:23.024466       7 log.go:172] (0xc002b702c0) Data frame received for 5
I0512 13:37:23.024499       7 log.go:172] (0xc0016d1c20) (5) Data frame handling
I0512 13:37:23.024535       7 log.go:172] (0xc002b702c0) Data frame received for 3
I0512 13:37:23.024558       7 log.go:172] (0xc001eb4280) (3) Data frame handling
I0512 13:37:23.026352       7 log.go:172] (0xc002b702c0) Data frame received for 1
I0512 13:37:23.026375       7 log.go:172] (0xc001708820) (1) Data frame handling
I0512 13:37:23.026407       7 log.go:172] (0xc001708820) (1) Data frame sent
I0512 13:37:23.026477       7 log.go:172] (0xc002b702c0) (0xc001708820) Stream removed, broadcasting: 1
I0512 13:37:23.026619       7 log.go:172] (0xc002b702c0) (0xc001708820) Stream removed, broadcasting: 1
I0512 13:37:23.026647       7 log.go:172] (0xc002b702c0) (0xc001eb4280) Stream removed, broadcasting: 3
I0512 13:37:23.026668       7 log.go:172] (0xc002b702c0) (0xc0016d1c20) Stream removed, broadcasting: 5
May 12 13:37:23.026: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
I0512 13:37:23.026764       7 log.go:172] (0xc002b702c0) Go away received
May 12 13:37:23.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6633" for this suite.

• [SLOW TEST:28.847 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3365,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:37:23.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May 12 13:37:23.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9578'
May 12 13:37:23.645: INFO: stderr: ""
May 12 13:37:23.645: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 12 13:37:23.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9578'
May 12 13:37:23.775: INFO: stderr: ""
May 12 13:37:23.775: INFO: stdout: "update-demo-nautilus-fmwkj update-demo-nautilus-znjt4 "
May 12 13:37:23.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fmwkj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9578'
May 12 13:37:23.887: INFO: stderr: ""
May 12 13:37:23.887: INFO: stdout: ""
May 12 13:37:23.887: INFO: update-demo-nautilus-fmwkj is created but not running
May 12 13:37:28.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9578'
May 12 13:37:30.856: INFO: stderr: ""
May 12 13:37:30.856: INFO: stdout: "update-demo-nautilus-fmwkj update-demo-nautilus-znjt4 "
May 12 13:37:30.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fmwkj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9578'
May 12 13:37:31.096: INFO: stderr: ""
May 12 13:37:31.096: INFO: stdout: "true"
May 12 13:37:31.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fmwkj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9578'
May 12 13:37:31.194: INFO: stderr: ""
May 12 13:37:31.194: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 13:37:31.194: INFO: validating pod update-demo-nautilus-fmwkj
May 12 13:37:31.198: INFO: got data: {
  "image": "nautilus.jpg"
}

May 12 13:37:31.198: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 13:37:31.198: INFO: update-demo-nautilus-fmwkj is verified up and running
May 12 13:37:31.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znjt4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9578'
May 12 13:37:31.297: INFO: stderr: ""
May 12 13:37:31.297: INFO: stdout: "true"
May 12 13:37:31.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znjt4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9578'
May 12 13:37:31.384: INFO: stderr: ""
May 12 13:37:31.384: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 13:37:31.384: INFO: validating pod update-demo-nautilus-znjt4
May 12 13:37:31.388: INFO: got data: {
  "image": "nautilus.jpg"
}

May 12 13:37:31.389: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 13:37:31.389: INFO: update-demo-nautilus-znjt4 is verified up and running
STEP: using delete to clean up resources
May 12 13:37:31.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9578'
May 12 13:37:31.528: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 13:37:31.528: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 12 13:37:31.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9578'
May 12 13:37:31.646: INFO: stderr: "No resources found in kubectl-9578 namespace.\n"
May 12 13:37:31.646: INFO: stdout: ""
May 12 13:37:31.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9578 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 12 13:37:31.749: INFO: stderr: ""
May 12 13:37:31.749: INFO: stdout: "update-demo-nautilus-fmwkj\nupdate-demo-nautilus-znjt4\n"
May 12 13:37:32.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9578'
May 12 13:37:32.714: INFO: stderr: "No resources found in kubectl-9578 namespace.\n"
May 12 13:37:32.714: INFO: stdout: ""
May 12 13:37:32.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9578 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 12 13:37:32.870: INFO: stderr: ""
May 12 13:37:32.870: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:37:32.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9578" for this suite.

• [SLOW TEST:10.140 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":209,"skipped":3367,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:37:33.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 12 13:37:33.728: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
May 12 13:37:34.585: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 12 13:37:38.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:37:40.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:37:42.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:37:44.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887454, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:37:46.693: INFO: Waited 620.092601ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:37:50.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5155" for this suite.

• [SLOW TEST:17.321 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":210,"skipped":3505,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:37:50.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 12 13:37:51.212: INFO: Waiting up to 5m0s for pod "pod-295d6776-ce5f-4ca6-b21b-9103f49aee73" in namespace "emptydir-3578" to be "Succeeded or Failed"
May 12 13:37:51.484: INFO: Pod "pod-295d6776-ce5f-4ca6-b21b-9103f49aee73": Phase="Pending", Reason="", readiness=false. Elapsed: 271.789644ms
May 12 13:37:53.964: INFO: Pod "pod-295d6776-ce5f-4ca6-b21b-9103f49aee73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.751580751s
May 12 13:37:56.193: INFO: Pod "pod-295d6776-ce5f-4ca6-b21b-9103f49aee73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.981039604s
May 12 13:37:58.388: INFO: Pod "pod-295d6776-ce5f-4ca6-b21b-9103f49aee73": Phase="Pending", Reason="", readiness=false. Elapsed: 7.176464008s
May 12 13:38:00.392: INFO: Pod "pod-295d6776-ce5f-4ca6-b21b-9103f49aee73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.18000006s
STEP: Saw pod success
May 12 13:38:00.392: INFO: Pod "pod-295d6776-ce5f-4ca6-b21b-9103f49aee73" satisfied condition "Succeeded or Failed"
May 12 13:38:00.555: INFO: Trying to get logs from node kali-worker pod pod-295d6776-ce5f-4ca6-b21b-9103f49aee73 container test-container: 
STEP: delete the pod
May 12 13:38:00.611: INFO: Waiting for pod pod-295d6776-ce5f-4ca6-b21b-9103f49aee73 to disappear
May 12 13:38:00.621: INFO: Pod pod-295d6776-ce5f-4ca6-b21b-9103f49aee73 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:38:00.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3578" for this suite.

• [SLOW TEST:10.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3544,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:38:00.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:38:01.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 12 13:38:03.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2451 create -f -'
May 12 13:38:08.526: INFO: stderr: ""
May 12 13:38:08.526: INFO: stdout: "e2e-test-crd-publish-openapi-9952-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 12 13:38:08.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2451 delete e2e-test-crd-publish-openapi-9952-crds test-cr'
May 12 13:38:08.623: INFO: stderr: ""
May 12 13:38:08.623: INFO: stdout: "e2e-test-crd-publish-openapi-9952-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 12 13:38:08.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2451 apply -f -'
May 12 13:38:08.923: INFO: stderr: ""
May 12 13:38:08.923: INFO: stdout: "e2e-test-crd-publish-openapi-9952-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 12 13:38:08.923: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2451 delete e2e-test-crd-publish-openapi-9952-crds test-cr'
May 12 13:38:09.031: INFO: stderr: ""
May 12 13:38:09.031: INFO: stdout: "e2e-test-crd-publish-openapi-9952-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 12 13:38:09.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9952-crds'
May 12 13:38:09.470: INFO: stderr: ""
May 12 13:38:09.470: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9952-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:38:12.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2451" for this suite.

• [SLOW TEST:11.742 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":212,"skipped":3549,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:38:12.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-33b612aa-f130-4c3a-ad3a-63733fc0ffca in namespace container-probe-7901
May 12 13:38:20.594: INFO: Started pod busybox-33b612aa-f130-4c3a-ad3a-63733fc0ffca in namespace container-probe-7901
STEP: checking the pod's current state and verifying that restartCount is present
May 12 13:38:20.597: INFO: Initial restart count of pod busybox-33b612aa-f130-4c3a-ad3a-63733fc0ffca is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:42:21.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7901" for this suite.

• [SLOW TEST:249.169 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3608,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:42:21.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8768.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8768.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8768.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8768.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8768.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8768.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 13:42:33.886: INFO: DNS probes using dns-8768/dns-test-cb40b6e8-7b64-4c70-966c-81f6d5b1bb8d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:42:34.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8768" for this suite.

• [SLOW TEST:13.472 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":214,"skipped":3666,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:42:35.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-ab6046a4-d93b-4c54-b6fb-b7ed4528d7b8 in namespace container-probe-3113
May 12 13:42:44.177: INFO: Started pod liveness-ab6046a4-d93b-4c54-b6fb-b7ed4528d7b8 in namespace container-probe-3113
STEP: checking the pod's current state and verifying that restartCount is present
May 12 13:42:44.179: INFO: Initial restart count of pod liveness-ab6046a4-d93b-4c54-b6fb-b7ed4528d7b8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:46:45.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3113" for this suite.

• [SLOW TEST:250.772 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3668,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:46:45.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 13:46:48.596: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 13:46:50.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888008, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888008, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888008, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888008, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:46:52.703: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888008, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888008, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888008, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888008, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 13:46:55.645: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:47:08.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3072" for this suite.
STEP: Destroying namespace "webhook-3072-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:23.476 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":216,"skipped":3672,"failed":0}
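For reference, the timeout semantics exercised by the test above correspond to a webhook configuration of roughly this shape. This is a hedged sketch, not the manifest the e2e framework actually builds at runtime: the service name, path, and rule are illustrative placeholders.

```yaml
# Sketch of a ValidatingWebhookConfiguration exercising the timeout behavior
# logged above. Service/path/rule values are illustrative, not from the log.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-example
webhooks:
  - name: slow.example.com
    clientConfig:
      service:
        namespace: webhook-3072        # namespace seen in the log
        name: e2e-test-webhook         # service name seen in the log
        path: /slow                    # illustrative path
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    timeoutSeconds: 1     # shorter than the webhook's 5s latency -> request fails
    failurePolicy: Fail   # with failurePolicy: Ignore, the same timeout passes
    # omitting timeoutSeconds defaults it to 10s in admissionregistration.k8s.io/v1,
    # which is the "timeout is empty" case the test verifies
```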
S
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:47:09.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-4d464990-2762-4e24-ae45-810edc55e637
STEP: Creating secret with name s-test-opt-upd-2cded265-6486-4b40-b5f8-a07e25604917
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4d464990-2762-4e24-ae45-810edc55e637
STEP: Updating secret s-test-opt-upd-2cded265-6486-4b40-b5f8-a07e25604917
STEP: Creating secret with name s-test-opt-create-c0d02af9-ba16-4d3f-8990-46113b027ce9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:48:35.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-854" for this suite.

• [SLOW TEST:85.931 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3673,"failed":0}
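The delete/update/create churn in the test above relies on the secret sources of a projected volume being marked `optional`, so the volume stays healthy while secrets come and go. A minimal sketch of that pod shape, assuming an illustrative container image (the secret names are the ones from the log):

```yaml
# Sketch of a pod consuming optional projected secrets; image is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  containers:
    - name: projected-secret-volume-test
      image: busybox                      # placeholder image
      command: ["sleep", "3600"]          # keep the pod alive to observe updates
      volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
          readOnly: true
  volumes:
    - name: projected-secret-volume
      projected:
        sources:
          - secret:
              name: s-test-opt-del-4d464990-2762-4e24-ae45-810edc55e637
              optional: true   # deleting this secret must not break the volume
          - secret:
              name: s-test-opt-upd-2cded265-6486-4b40-b5f8-a07e25604917
              optional: true   # updates here should appear in the volume
```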
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:48:35.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 12 13:48:35.414: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:35.574: INFO: Number of nodes with available pods: 0
May 12 13:48:35.574: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:36.751: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:36.965: INFO: Number of nodes with available pods: 0
May 12 13:48:36.965: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:37.578: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:37.581: INFO: Number of nodes with available pods: 0
May 12 13:48:37.581: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:38.748: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:38.783: INFO: Number of nodes with available pods: 0
May 12 13:48:38.783: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:39.748: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:40.065: INFO: Number of nodes with available pods: 0
May 12 13:48:40.065: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:40.579: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:40.584: INFO: Number of nodes with available pods: 0
May 12 13:48:40.584: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:41.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:41.988: INFO: Number of nodes with available pods: 2
May 12 13:48:41.988: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 12 13:48:42.466: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:42.484: INFO: Number of nodes with available pods: 1
May 12 13:48:42.484: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:43.587: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:43.590: INFO: Number of nodes with available pods: 1
May 12 13:48:43.590: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:44.497: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:44.501: INFO: Number of nodes with available pods: 1
May 12 13:48:44.501: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:45.514: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:45.517: INFO: Number of nodes with available pods: 1
May 12 13:48:45.517: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:46.488: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:46.491: INFO: Number of nodes with available pods: 1
May 12 13:48:46.491: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:47.682: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:47.688: INFO: Number of nodes with available pods: 1
May 12 13:48:47.688: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:48.545: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:48.633: INFO: Number of nodes with available pods: 1
May 12 13:48:48.633: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:49.665: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:49.726: INFO: Number of nodes with available pods: 1
May 12 13:48:49.726: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:50.490: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:50.493: INFO: Number of nodes with available pods: 1
May 12 13:48:50.493: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:51.489: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:51.492: INFO: Number of nodes with available pods: 1
May 12 13:48:51.492: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:52.551: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:52.555: INFO: Number of nodes with available pods: 1
May 12 13:48:52.555: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:53.488: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:53.490: INFO: Number of nodes with available pods: 1
May 12 13:48:53.490: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:54.544: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:54.547: INFO: Number of nodes with available pods: 1
May 12 13:48:54.547: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:55.580: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:55.582: INFO: Number of nodes with available pods: 1
May 12 13:48:55.582: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:56.488: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:56.492: INFO: Number of nodes with available pods: 1
May 12 13:48:56.492: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:57.487: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:57.491: INFO: Number of nodes with available pods: 1
May 12 13:48:57.491: INFO: Node kali-worker is running more than one daemon pod
May 12 13:48:58.517: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:48:58.521: INFO: Number of nodes with available pods: 2
May 12 13:48:58.521: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4217, will wait for the garbage collector to delete the pods
May 12 13:48:58.620: INFO: Deleting DaemonSet.extensions daemon-set took: 8.249256ms
May 12 13:48:59.221: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.269904ms
May 12 13:49:13.824: INFO: Number of nodes with available pods: 0
May 12 13:49:13.824: INFO: Number of running nodes: 0, number of available pods: 0
May 12 13:49:13.827: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4217/daemonsets","resourceVersion":"3743618"},"items":null}

May 12 13:49:13.829: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4217/pods","resourceVersion":"3743618"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:49:13.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4217" for this suite.

• [SLOW TEST:38.552 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":218,"skipped":3692,"failed":0}
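The repeated "DaemonSet pods can't tolerate node kali-control-plane" lines above reflect that the test DaemonSet carries no toleration for the master taint, so the control-plane node is skipped by design. A DaemonSet that should also cover such nodes would add a toleration along these lines (manifest fragment, shown for illustration):

```yaml
# Sketch: toleration matching the taint logged above
# ({Key:node-role.kubernetes.io/master Effect:NoSchedule}).
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
```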
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:49:13.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-4bfbc4e5-e9bf-4906-99e5-2aa96e7f2a12
STEP: Creating a pod to test consume secrets
May 12 13:49:13.979: INFO: Waiting up to 5m0s for pod "pod-secrets-e39712f2-c216-4aaa-8988-1b39682342de" in namespace "secrets-2662" to be "Succeeded or Failed"
May 12 13:49:14.005: INFO: Pod "pod-secrets-e39712f2-c216-4aaa-8988-1b39682342de": Phase="Pending", Reason="", readiness=false. Elapsed: 25.624992ms
May 12 13:49:16.008: INFO: Pod "pod-secrets-e39712f2-c216-4aaa-8988-1b39682342de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029041533s
May 12 13:49:18.012: INFO: Pod "pod-secrets-e39712f2-c216-4aaa-8988-1b39682342de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032953865s
May 12 13:49:20.016: INFO: Pod "pod-secrets-e39712f2-c216-4aaa-8988-1b39682342de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036423266s
STEP: Saw pod success
May 12 13:49:20.016: INFO: Pod "pod-secrets-e39712f2-c216-4aaa-8988-1b39682342de" satisfied condition "Succeeded or Failed"
May 12 13:49:20.018: INFO: Trying to get logs from node kali-worker pod pod-secrets-e39712f2-c216-4aaa-8988-1b39682342de container secret-volume-test: 
STEP: delete the pod
May 12 13:49:20.233: INFO: Waiting for pod pod-secrets-e39712f2-c216-4aaa-8988-1b39682342de to disappear
May 12 13:49:20.288: INFO: Pod pod-secrets-e39712f2-c216-4aaa-8988-1b39682342de no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:49:20.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2662" for this suite.

• [SLOW TEST:6.452 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3695,"failed":0}
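The "Creating a pod to test consume secrets" step above corresponds to a pod that mounts the secret as a volume, reads it once, and exits so the pod can reach the "Succeeded" phase the test waits for. A minimal sketch, assuming an illustrative image, command, and secret key (only the secret name is taken from the log):

```yaml
# Sketch of a secret-as-volume consumer pod; image, command, and key name
# are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never            # lets a finished pod reach phase Succeeded
  containers:
    - name: secret-volume-test
      image: busybox              # placeholder image
      command: ["cat", "/etc/secret-volume/data-1"]   # illustrative key
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-4bfbc4e5-e9bf-4906-99e5-2aa96e7f2a12
```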
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:49:20.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-0101632f-2986-425e-a1de-286fa33ea00b
STEP: Creating a pod to test consume configMaps
May 12 13:49:20.581: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f36e077-887a-458a-92da-71985ea3a16b" in namespace "projected-3321" to be "Succeeded or Failed"
May 12 13:49:20.612: INFO: Pod "pod-projected-configmaps-1f36e077-887a-458a-92da-71985ea3a16b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.941147ms
May 12 13:49:22.676: INFO: Pod "pod-projected-configmaps-1f36e077-887a-458a-92da-71985ea3a16b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09574677s
May 12 13:49:24.752: INFO: Pod "pod-projected-configmaps-1f36e077-887a-458a-92da-71985ea3a16b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171454859s
May 12 13:49:26.755: INFO: Pod "pod-projected-configmaps-1f36e077-887a-458a-92da-71985ea3a16b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.174640279s
STEP: Saw pod success
May 12 13:49:26.755: INFO: Pod "pod-projected-configmaps-1f36e077-887a-458a-92da-71985ea3a16b" satisfied condition "Succeeded or Failed"
May 12 13:49:26.758: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-1f36e077-887a-458a-92da-71985ea3a16b container projected-configmap-volume-test: 
STEP: delete the pod
May 12 13:49:26.874: INFO: Waiting for pod pod-projected-configmaps-1f36e077-887a-458a-92da-71985ea3a16b to disappear
May 12 13:49:26.885: INFO: Pod pod-projected-configmaps-1f36e077-887a-458a-92da-71985ea3a16b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:49:26.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3321" for this suite.

• [SLOW TEST:6.594 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3715,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:49:26.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-xxrxn in namespace proxy-4995
I0512 13:49:27.166692       7 runners.go:190] Created replication controller with name: proxy-service-xxrxn, namespace: proxy-4995, replica count: 1
I0512 13:49:28.217042       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:49:29.217257       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:49:30.217419       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:49:31.217592       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:49:32.217789       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:49:33.217986       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0512 13:49:34.218233       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0512 13:49:35.218451       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0512 13:49:36.218662       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0512 13:49:37.218830       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0512 13:49:38.218981       7 runners.go:190] proxy-service-xxrxn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 12 13:49:38.424: INFO: setup took 11.388142866s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May 12 13:49:38.433: INFO: (0) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test<... (200; 47.892016ms)
May 12 13:49:38.472: INFO: (0) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 48.303928ms)
May 12 13:49:38.473: INFO: (0) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 48.581526ms)
May 12 13:49:38.473: INFO: (0) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 48.653481ms)
May 12 13:49:38.473: INFO: (0) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 48.869066ms)
May 12 13:49:38.473: INFO: (0) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 49.335119ms)
May 12 13:49:38.474: INFO: (0) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 49.728264ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 5.122726ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 5.193095ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 5.427991ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 5.530249ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 5.480098ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 5.584141ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 5.555607ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 5.731958ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 5.824814ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 5.768226ms)
May 12 13:49:38.479: INFO: (1) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 5.757076ms)
May 12 13:49:38.480: INFO: (1) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 5.876952ms)
May 12 13:49:38.480: INFO: (1) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ... (200; 6.667772ms)
May 12 13:49:38.484: INFO: (2) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 3.548429ms)
May 12 13:49:38.484: INFO: (2) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ... (200; 3.691358ms)
May 12 13:49:38.484: INFO: (2) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 3.758545ms)
May 12 13:49:38.484: INFO: (2) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 3.687964ms)
May 12 13:49:38.484: INFO: (2) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 3.712706ms)
May 12 13:49:38.484: INFO: (2) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 3.68335ms)
May 12 13:49:38.484: INFO: (2) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 3.779477ms)
May 12 13:49:38.485: INFO: (2) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 4.574446ms)
May 12 13:49:38.485: INFO: (2) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 4.529519ms)
May 12 13:49:38.485: INFO: (2) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 4.58899ms)
May 12 13:49:38.485: INFO: (2) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 4.666458ms)
May 12 13:49:38.485: INFO: (2) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 4.723151ms)
May 12 13:49:38.485: INFO: (2) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 4.921187ms)
May 12 13:49:38.485: INFO: (2) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 4.927454ms)
May 12 13:49:38.486: INFO: (2) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 5.382886ms)
May 12 13:49:38.488: INFO: (3) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 1.762292ms)
May 12 13:49:38.490: INFO: (3) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 4.262131ms)
May 12 13:49:38.490: INFO: (3) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 4.304961ms)
May 12 13:49:38.490: INFO: (3) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 4.459153ms)
May 12 13:49:38.490: INFO: (3) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.586501ms)
May 12 13:49:38.491: INFO: (3) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.80523ms)
May 12 13:49:38.491: INFO: (3) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 4.793045ms)
May 12 13:49:38.491: INFO: (3) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 5.063438ms)
May 12 13:49:38.491: INFO: (3) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 5.047902ms)
May 12 13:49:38.491: INFO: (3) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 5.12186ms)
May 12 13:49:38.491: INFO: (3) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 5.570274ms)
May 12 13:49:38.491: INFO: (3) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 5.666079ms)
May 12 13:49:38.491: INFO: (3) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 5.634113ms)
May 12 13:49:38.491: INFO: (3) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 5.661169ms)
May 12 13:49:38.492: INFO: (3) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test<... (200; 4.002996ms)
May 12 13:49:38.496: INFO: (4) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 3.991452ms)
May 12 13:49:38.496: INFO: (4) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 4.06757ms)
May 12 13:49:38.496: INFO: (4) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 4.267294ms)
May 12 13:49:38.496: INFO: (4) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 4.414828ms)
May 12 13:49:38.497: INFO: (4) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 5.346515ms)
May 12 13:49:38.497: INFO: (4) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 5.387089ms)
May 12 13:49:38.497: INFO: (4) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 5.423714ms)
May 12 13:49:38.497: INFO: (4) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 5.595138ms)
May 12 13:49:38.497: INFO: (4) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 5.601163ms)
May 12 13:49:38.497: INFO: (4) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 5.713838ms)
May 12 13:49:38.498: INFO: (4) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 5.964724ms)
May 12 13:49:38.498: INFO: (4) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 5.906105ms)
May 12 13:49:38.498: INFO: (4) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 5.97967ms)
May 12 13:49:38.498: INFO: (4) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test (200; 3.501676ms)
May 12 13:49:38.501: INFO: (5) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 3.551946ms)
May 12 13:49:38.501: INFO: (5) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 3.595068ms)
May 12 13:49:38.501: INFO: (5) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 3.596245ms)
May 12 13:49:38.501: INFO: (5) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 3.774587ms)
May 12 13:49:38.501: INFO: (5) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 3.707056ms)
May 12 13:49:38.502: INFO: (5) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.098741ms)
May 12 13:49:38.502: INFO: (5) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test (200; 4.263206ms)
May 12 13:49:38.507: INFO: (6) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.207406ms)
May 12 13:49:38.507: INFO: (6) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ... (200; 4.731497ms)
May 12 13:49:38.508: INFO: (6) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 4.784632ms)
May 12 13:49:38.508: INFO: (6) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 4.919252ms)
May 12 13:49:38.510: INFO: (6) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 7.146981ms)
May 12 13:49:38.510: INFO: (6) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 7.409397ms)
May 12 13:49:38.510: INFO: (6) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 7.333049ms)
May 12 13:49:38.511: INFO: (6) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 7.495481ms)
May 12 13:49:38.511: INFO: (6) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 7.486613ms)
May 12 13:49:38.514: INFO: (7) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 3.07372ms)
May 12 13:49:38.515: INFO: (7) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 3.923879ms)
May 12 13:49:38.515: INFO: (7) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 4.37308ms)
May 12 13:49:38.515: INFO: (7) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 4.352466ms)
May 12 13:49:38.515: INFO: (7) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 4.371522ms)
May 12 13:49:38.515: INFO: (7) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 4.411158ms)
May 12 13:49:38.515: INFO: (7) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 4.441622ms)
May 12 13:49:38.515: INFO: (7) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 4.447692ms)
May 12 13:49:38.515: INFO: (7) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ... (200; 4.86223ms)
May 12 13:49:38.516: INFO: (7) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 4.871749ms)
May 12 13:49:38.516: INFO: (7) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.915196ms)
May 12 13:49:38.516: INFO: (7) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 4.948871ms)
May 12 13:49:38.516: INFO: (7) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 4.904785ms)
May 12 13:49:38.516: INFO: (7) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 5.312036ms)
May 12 13:49:38.516: INFO: (7) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 5.359909ms)
May 12 13:49:38.518: INFO: (8) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 2.073424ms)
May 12 13:49:38.520: INFO: (8) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 3.476531ms)
May 12 13:49:38.520: INFO: (8) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 3.654127ms)
May 12 13:49:38.520: INFO: (8) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 3.884858ms)
May 12 13:49:38.520: INFO: (8) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 3.913155ms)
May 12 13:49:38.520: INFO: (8) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 3.87752ms)
May 12 13:49:38.520: INFO: (8) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ... (200; 142.893714ms)
May 12 13:49:38.670: INFO: (9) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 148.537254ms)
May 12 13:49:38.671: INFO: (9) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 148.596158ms)
May 12 13:49:38.671: INFO: (9) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 148.717384ms)
May 12 13:49:38.671: INFO: (9) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 148.953005ms)
May 12 13:49:38.671: INFO: (9) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ... (200; 3.71322ms)
May 12 13:49:38.678: INFO: (10) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 3.939898ms)
May 12 13:49:38.678: INFO: (10) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 4.07765ms)
May 12 13:49:38.678: INFO: (10) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.08489ms)
May 12 13:49:38.678: INFO: (10) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 4.249177ms)
May 12 13:49:38.679: INFO: (10) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test (200; 4.917601ms)
May 12 13:49:38.679: INFO: (10) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 4.980133ms)
May 12 13:49:38.679: INFO: (10) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 5.019963ms)
May 12 13:49:38.679: INFO: (10) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 4.962426ms)
May 12 13:49:38.683: INFO: (11) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 3.301666ms)
May 12 13:49:38.683: INFO: (11) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 3.403965ms)
May 12 13:49:38.683: INFO: (11) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 3.954984ms)
May 12 13:49:38.683: INFO: (11) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 4.029365ms)
May 12 13:49:38.683: INFO: (11) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 4.067605ms)
May 12 13:49:38.683: INFO: (11) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 4.138076ms)
May 12 13:49:38.683: INFO: (11) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 4.114394ms)
May 12 13:49:38.683: INFO: (11) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 4.162714ms)
May 12 13:49:38.683: INFO: (11) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ... (200; 5.107776ms)
May 12 13:49:38.689: INFO: (12) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 5.188546ms)
May 12 13:49:38.689: INFO: (12) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 5.308526ms)
May 12 13:49:38.689: INFO: (12) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test<... (200; 5.767358ms)
May 12 13:49:38.690: INFO: (12) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 5.716595ms)
May 12 13:49:38.690: INFO: (12) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 5.823613ms)
May 12 13:49:38.690: INFO: (12) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 5.750963ms)
May 12 13:49:38.690: INFO: (12) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 5.793957ms)
May 12 13:49:38.690: INFO: (12) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 5.808334ms)
May 12 13:49:38.690: INFO: (12) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 5.882963ms)
May 12 13:49:38.692: INFO: (13) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 2.265779ms)
May 12 13:49:38.692: INFO: (13) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 2.400497ms)
May 12 13:49:38.692: INFO: (13) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 2.567054ms)
May 12 13:49:38.693: INFO: (13) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 2.872496ms)
May 12 13:49:38.693: INFO: (13) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 2.915115ms)
May 12 13:49:38.693: INFO: (13) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 2.904342ms)
May 12 13:49:38.693: INFO: (13) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 3.213689ms)
May 12 13:49:38.693: INFO: (13) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 3.224317ms)
May 12 13:49:38.693: INFO: (13) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 3.288767ms)
May 12 13:49:38.693: INFO: (13) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ... (200; 3.551531ms)
May 12 13:49:38.698: INFO: (14) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 3.551665ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 4.269686ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.704077ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 4.653479ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 4.65774ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 4.710221ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 4.70354ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 4.821137ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 4.85997ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.91587ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 4.884439ms)
May 12 13:49:38.699: INFO: (14) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test<... (200; 3.124078ms)
May 12 13:49:38.703: INFO: (15) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 3.742304ms)
May 12 13:49:38.703: INFO: (15) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 3.721008ms)
May 12 13:49:38.703: INFO: (15) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 3.77126ms)
May 12 13:49:38.703: INFO: (15) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 3.990551ms)
May 12 13:49:38.703: INFO: (15) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 4.02566ms)
May 12 13:49:38.703: INFO: (15) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 4.086385ms)
May 12 13:49:38.704: INFO: (15) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 4.125699ms)
May 12 13:49:38.704: INFO: (15) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 4.24883ms)
May 12 13:49:38.704: INFO: (15) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 4.180706ms)
May 12 13:49:38.704: INFO: (15) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.235922ms)
May 12 13:49:38.704: INFO: (15) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test<... (200; 3.830278ms)
May 12 13:49:38.710: INFO: (16) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 4.582319ms)
May 12 13:49:38.710: INFO: (16) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 4.610424ms)
May 12 13:49:38.710: INFO: (16) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 4.595195ms)
May 12 13:49:38.710: INFO: (16) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 4.815121ms)
May 12 13:49:38.710: INFO: (16) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.859997ms)
May 12 13:49:38.710: INFO: (16) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 5.105771ms)
May 12 13:49:38.710: INFO: (16) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 5.201887ms)
May 12 13:49:38.710: INFO: (16) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 5.149089ms)
May 12 13:49:38.711: INFO: (16) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 5.192315ms)
May 12 13:49:38.711: INFO: (16) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 5.464503ms)
May 12 13:49:38.711: INFO: (16) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ... (200; 14.485447ms)
May 12 13:49:38.726: INFO: (17) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test<... (200; 14.370716ms)
May 12 13:49:38.726: INFO: (17) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 14.517475ms)
May 12 13:49:38.726: INFO: (17) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 14.893054ms)
May 12 13:49:38.727: INFO: (17) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 15.390106ms)
May 12 13:49:38.727: INFO: (17) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 15.337373ms)
May 12 13:49:38.727: INFO: (17) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 15.795271ms)
May 12 13:49:38.731: INFO: (18) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: test (200; 3.493642ms)
May 12 13:49:38.731: INFO: (18) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 3.453041ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 4.418913ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.450412ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 4.488258ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 4.462067ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 4.584975ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 4.53716ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 4.580283ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 4.647239ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 4.611073ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.681941ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 4.637605ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 4.664914ms)
May 12 13:49:38.732: INFO: (18) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 4.663426ms)
May 12 13:49:38.736: INFO: (19) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 3.649059ms)
May 12 13:49:38.736: INFO: (19) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 3.794411ms)
May 12 13:49:38.736: INFO: (19) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:460/proxy/: tls baz (200; 3.767937ms)
May 12 13:49:38.736: INFO: (19) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:160/proxy/: foo (200; 4.274387ms)
May 12 13:49:38.736: INFO: (19) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:162/proxy/: bar (200; 4.33375ms)
May 12 13:49:38.736: INFO: (19) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:462/proxy/: tls qux (200; 4.36205ms)
May 12 13:49:38.737: INFO: (19) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname2/proxy/: bar (200; 5.119057ms)
May 12 13:49:38.737: INFO: (19) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7:1080/proxy/: test<... (200; 5.264948ms)
May 12 13:49:38.737: INFO: (19) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname1/proxy/: tls baz (200; 5.51156ms)
May 12 13:49:38.737: INFO: (19) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname1/proxy/: foo (200; 5.476944ms)
May 12 13:49:38.737: INFO: (19) /api/v1/namespaces/proxy-4995/services/https:proxy-service-xxrxn:tlsportname2/proxy/: tls qux (200; 5.533838ms)
May 12 13:49:38.737: INFO: (19) /api/v1/namespaces/proxy-4995/services/proxy-service-xxrxn:portname2/proxy/: bar (200; 5.469611ms)
May 12 13:49:38.737: INFO: (19) /api/v1/namespaces/proxy-4995/pods/http:proxy-service-xxrxn-f2pw7:1080/proxy/: ... (200; 5.474662ms)
May 12 13:49:38.737: INFO: (19) /api/v1/namespaces/proxy-4995/pods/proxy-service-xxrxn-f2pw7/proxy/: test (200; 5.550458ms)
May 12 13:49:38.737: INFO: (19) /api/v1/namespaces/proxy-4995/services/http:proxy-service-xxrxn:portname1/proxy/: foo (200; 5.520748ms)
May 12 13:49:38.738: INFO: (19) /api/v1/namespaces/proxy-4995/pods/https:proxy-service-xxrxn-f2pw7:443/proxy/: ...
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-7e329b00-3c21-4ac6-826c-9853240115cc
STEP: Creating configMap with name cm-test-opt-upd-19e8294b-e096-49d6-964d-97deed8586bf
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-7e329b00-3c21-4ac6-826c-9853240115cc
STEP: Updating configmap cm-test-opt-upd-19e8294b-e096-49d6-964d-97deed8586bf
STEP: Creating configMap with name cm-test-opt-create-aaad5363-04e9-4339-b3b3-05945c56fe8e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:50:04.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6616" for this suite.

• [SLOW TEST:10.948 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3746,"failed":0}
SSSSSSSSSSS
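The Projected configMap test above asserts that create, update, and delete of *optional* configMaps are reflected in the mounted volume without failing the pod. A minimal local sketch of those semantics (illustrative only, not the e2e framework's code; the `project_configmap` helper and the data values are hypothetical):

```python
# Sketch of optional-configMap projection semantics: a missing optional
# source yields an empty volume instead of a volume-setup failure.
def project_configmap(configmaps, name, optional=True):
    """Return the files a projected volume would expose for `name`."""
    if name not in configmaps:
        if optional:
            return {}          # optional + missing -> empty volume, pod keeps running
        raise KeyError(name)   # required + missing -> volume setup fails
    return dict(configmaps[name])

cms = {
    "cm-test-opt-del": {"data-1": "value-1"},
    "cm-test-opt-upd": {"data-1": "value-1"},
}

del cms["cm-test-opt-del"]                          # STEP: Deleting configmap
cms["cm-test-opt-upd"] = {"data-3": "value-3"}      # STEP: Updating configmap
cms["cm-test-opt-create"] = {"data-1": "value-1"}   # STEP: Creating configMap

assert project_configmap(cms, "cm-test-opt-del") == {}
assert project_configmap(cms, "cm-test-opt-upd") == {"data-3": "value-3"}
assert project_configmap(cms, "cm-test-opt-create") == {"data-1": "value-1"}
```

The three containers recorded later in the log (createcm-, delcm-, updcm-volume-test) each watch one of these volumes for the corresponding change.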
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:50:04.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 12 13:50:04.797: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 12 13:50:04.842: INFO: Waiting for terminating namespaces to be deleted...
May 12 13:50:04.844: INFO: Logging pods the kubelet thinks are on node kali-worker before test
May 12 13:50:04.850: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May 12 13:50:04.850: INFO: 	Container kindnet-cni ready: true, restart count 1
May 12 13:50:04.850: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May 12 13:50:04.850: INFO: 	Container kube-proxy ready: true, restart count 0
May 12 13:50:04.850: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
May 12 13:50:04.854: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 13:50:04.854: INFO: 	Container kindnet-cni ready: true, restart count 0
May 12 13:50:04.854: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 12 13:50:04.854: INFO: 	Container kube-proxy ready: true, restart count 0
May 12 13:50:04.854: INFO: pod-projected-configmaps-98871745-2e4d-4ca2-ac67-16b3a1a4b713 from projected-6616 started at 2020-05-12 13:49:54 +0000 UTC (3 container statuses recorded)
May 12 13:50:04.854: INFO: 	Container createcm-volume-test ready: true, restart count 0
May 12 13:50:04.854: INFO: 	Container delcm-volume-test ready: true, restart count 0
May 12 13:50:04.854: INFO: 	Container updcm-volume-test ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-051619ff-db0d-4f73-bac4-096f1febd702 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-051619ff-db0d-4f73-bac4-096f1febd702 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-051619ff-db0d-4f73-bac4-096f1febd702
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:50:19.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1509" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:14.882 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":223,"skipped":3757,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
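The SchedulerPredicates test above checks the NodeSelector predicate: a pod's nodeSelector must be a subset of the target node's labels. A minimal sketch of that matching rule (the helper is illustrative, not kube-scheduler code; the label key and value are taken from the log):

```python
# NodeSelector predicate sketch: every key/value in the pod's nodeSelector
# must be present with the same value in the node's labels.
def node_selector_matches(node_labels, node_selector):
    return all(node_labels.get(k) == v for k, v in node_selector.items())

node_labels = {
    "kubernetes.io/hostname": "kali-worker",
    # random label the test applies to the found node:
    "kubernetes.io/e2e-051619ff-db0d-4f73-bac4-096f1febd702": "42",
}
selector = {"kubernetes.io/e2e-051619ff-db0d-4f73-bac4-096f1febd702": "42"}

# relaunched pod (now with the selector) schedules onto the labelled node
assert node_selector_matches(node_labels, selector)

# after the test removes the label, the same selector no longer matches
node_labels.pop("kubernetes.io/e2e-051619ff-db0d-4f73-bac4-096f1febd702")
assert not node_selector_matches(node_labels, selector)
```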
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:50:19.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 13:50:22.926: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 13:50:25.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888223, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888223, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888223, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888222, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:50:27.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888223, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888223, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888223, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888222, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 13:50:30.664: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that the server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: creating a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:50:30.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2217" for this suite.
STEP: Destroying namespace "webhook-2217-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.870 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":224,"skipped":3778,"failed":0}
SS
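The webhook test above registers a backend the apiserver cannot reach, with a fail-closed policy, so every matching request must be rejected. A sketch of the `failurePolicy` decision (illustrative only; `admit` is a hypothetical helper, not apiserver code):

```python
# failurePolicy semantics sketch: when the webhook call errors out,
# "Fail" rejects the request (fail closed) while "Ignore" admits it.
def admit(webhook_reachable, webhook_allows, failure_policy):
    if not webhook_reachable:
        return failure_policy == "Ignore"   # "Fail" -> reject on webhook error
    return webhook_allows

# The test's webhook is unreachable and registered with failurePolicy=Fail,
# so the configmap create in the marker namespace is unconditionally rejected:
assert admit(webhook_reachable=False, webhook_allows=True, failure_policy="Fail") is False
assert admit(webhook_reachable=False, webhook_allows=True, failure_policy="Ignore") is True
```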
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:50:31.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
May 12 13:50:40.756: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-962 pod-service-account-287fc253-6efa-404a-b7a9-3065b33142bd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 12 13:50:49.452: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-962 pod-service-account-287fc253-6efa-404a-b7a9-3065b33142bd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 12 13:50:49.661: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-962 pod-service-account-287fc253-6efa-404a-b7a9-3065b33142bd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:50:50.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-962" for this suite.

• [SLOW TEST:19.437 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
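The ServiceAccounts test above verifies the token mount by `kubectl exec`-ing into the pod and reading the three projected files. A minimal sketch of the same check, with placeholder namespace and pod names (the real ones are generated per run) and a guard so it is a no-op when `kubectl` is unavailable:

```shell
# Hypothetical re-run of the check performed by the test: read the projected
# service-account files inside a running pod. NS and POD are placeholders;
# substitute the namespace and pod from your own cluster.
NS="${NS:-svcaccounts-962}"
POD="${POD:-pod-service-account-example}"   # placeholder pod name
if command -v kubectl >/dev/null 2>&1; then
  for f in token ca.crt namespace; do
    echo "--- $f ---"
    kubectl exec -n "$NS" "$POD" -c test -- \
      cat "/var/run/secrets/kubernetes.io/serviceaccount/$f"
  done
else
  echo "kubectl not found; skipping"
fi
```

The `token`, `ca.crt`, and `namespace` paths are the standard mount points the test reads; a non-empty `cat` of each confirms the API token was mounted.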
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":225,"skipped":3780,"failed":0}
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:50:50.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:50:51.631: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 12 13:50:56.634: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 12 13:50:58.642: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 12 13:51:00.645: INFO: Creating deployment "test-rollover-deployment"
May 12 13:51:00.658: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 12 13:51:02.695: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 12 13:51:02.700: INFO: Ensure that both replica sets have 1 created replica
May 12 13:51:02.705: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 12 13:51:02.711: INFO: Updating deployment test-rollover-deployment
May 12 13:51:02.711: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 12 13:51:04.808: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May 12 13:51:04.867: INFO: Make sure deployment "test-rollover-deployment" is complete
May 12 13:51:04.903: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:51:04.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888264, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:51:07.214: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:51:07.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888264, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:51:09.122: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:51:09.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888264, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:51:10.910: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:51:10.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888269, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:51:12.912: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:51:12.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888269, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:51:14.909: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:51:14.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888269, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:51:16.911: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:51:16.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888269, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:51:18.910: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:51:18.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888269, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888260, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:51:20.911: INFO: 
May 12 13:51:20.911: INFO: Ensure that both old replica sets have no replicas
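The rollover sequence logged above (create deployment, update the pod template image, wait until the new ReplicaSet fully replaces the old ones) can be driven manually with kubectl. A hedged sketch, reusing the deployment name and image from the log and guarded so it skips when `kubectl` is absent:

```shell
# Hypothetical manual equivalent of the rollover the test performs.
# Names and image mirror the log output; adjust for your own cluster.
DEPLOY="test-rollover-deployment"
NEW_IMAGE="us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"
if command -v kubectl >/dev/null 2>&1; then
  # Update the container image, triggering a new ReplicaSet revision.
  kubectl set image "deployment/$DEPLOY" "agnhost=$NEW_IMAGE"
  # Block until the new ReplicaSet is fully rolled out.
  kubectl rollout status "deployment/$DEPLOY" --timeout=120s
  # After a successful rollover, the old ReplicaSets report 0 replicas.
  kubectl get rs -l name=rollover-pod
else
  echo "kubectl not found; skipping"
fi
```

With `minReadySeconds: 10` set on the deployment (visible in the dump below), the new pod must stay ready for 10 seconds before it counts as available, which is why the log polls the status several times before the old ReplicaSets scale to zero.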
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 12 13:51:20.919: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-1057 /apis/apps/v1/namespaces/deployment-1057/deployments/test-rollover-deployment 65e4881f-e99a-4c09-bb9d-37b32f7966f1 3744345 2 2020-05-12 13:51:00 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-12 13:51:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-12 13:51:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 
58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052c9c38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-12 13:51:00 +0000 UTC,LastTransitionTime:2020-05-12 13:51:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-05-12 13:51:19 +0000 UTC,LastTransitionTime:2020-05-12 13:51:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
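The `FieldsV1` `Raw` fields in the object dump above are printed as sequences of decimal ASCII byte values rather than as JSON. They can be decoded back into readable managed-fields JSON; the helper below does this for the first fifteen bytes of the dump (the full sequences decode the same way):

```shell
# Decode a sequence of decimal ASCII byte values (as printed in the
# FieldsV1 Raw dumps above) back into text.
decode_bytes() {
  for b in "$@"; do
    # printf interprets \NNN as an octal escape, so convert each decimal
    # byte value to octal first.
    printf "\\$(printf '%03o' "$b")"
  done
  printf '\n'
}

# First 15 byte values from the dump: the opening of the managed-fields JSON.
decode_bytes 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123
# → {"f:metadata":{
```

Decoding the full arrays yields the managed-fields entries (`f:spec`, `f:strategy`, `f:template`, and so on) recorded by `e2e.test` and `kube-controller-manager`.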

May 12 13:51:20.922: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-1057 /apis/apps/v1/namespaces/deployment-1057/replicasets/test-rollover-deployment-84f7f6f64b 492e41f0-9a03-4311-9725-70c933c1a916 3744334 2 2020-05-12 13:51:02 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 65e4881f-e99a-4c09-bb9d-37b32f7966f1 0xc0050604b7 0xc0050604b8}] []  [{kube-controller-manager Update apps/v1 2020-05-12 13:51:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 53 101 52 56 56 49 102 45 101 57 57 97 45 52 99 48 57 45 98 98 57 100 45 51 55 98 51 50 102 55 57 54 54 102 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 
58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 
110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005060548  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 12 13:51:20.922: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May 12 13:51:20.922: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-1057 /apis/apps/v1/namespaces/deployment-1057/replicasets/test-rollover-controller 2e99fe70-2fc9-4954-8aed-95f4dca25775 3744343 2 2020-05-12 13:50:51 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 65e4881f-e99a-4c09-bb9d-37b32f7966f1 0xc005060297 0xc005060298}] []  [{e2e.test Update apps/v1 2020-05-12 13:50:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 
121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-12 13:51:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 53 101 52 56 56 49 102 45 101 57 57 97 45 52 99 48 57 45 98 98 57 100 45 51 55 98 51 50 102 55 57 54 54 102 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005060338  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 12 13:51:20.923: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-1057 /apis/apps/v1/namespaces/deployment-1057/replicasets/test-rollover-deployment-5686c4cfd5 a8f71c88-b6a6-424b-a854-fc039e416382 3744277 2 2020-05-12 13:51:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 65e4881f-e99a-4c09-bb9d-37b32f7966f1 0xc0050603a7 0xc0050603a8}] []  [{kube-controller-manager Update apps/v1 2020-05-12 13:51:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 53 101 52 56 56 49 102 45 101 57 57 97 45 52 99 48 57 45 98 98 57 100 45 51 55 98 51 50 102 55 57 54 54 102 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 
58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005060438  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 12 13:51:20.926: INFO: Pod "test-rollover-deployment-84f7f6f64b-fbnq2" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-fbnq2 test-rollover-deployment-84f7f6f64b- deployment-1057 /api/v1/namespaces/deployment-1057/pods/test-rollover-deployment-84f7f6f64b-fbnq2 582e6aba-d73d-4fb6-a04b-33a18717c31e 3744302 0 2020-05-12 13:51:03 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 492e41f0-9a03-4311-9725-70c933c1a916 0xc005060cc7 0xc005060cc8}] []  [{kube-controller-manager Update v1 2020-05-12 13:51:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 57 50 101 52 49 102 48 45 57 97 48 51 45 52 51 49 49 45 57 55 50 53 45 55 48 99 57 51 51 99 49 97 57 49 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-12 13:51:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 
84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 49 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rps6g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rps6g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rps6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,
RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 13:51:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 13:51:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 13:51:09 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 13:51:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.112,StartTime:2020-05-12 13:51:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 13:51:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://db5648ef48d19dda11ba305bda475754c75b48334bfa716413007bb019709d9b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
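The `FieldsV1{Raw:*[123 34 ...]}` arrays in the ReplicaSet and Pod dumps above are managedFields JSON printed as decimal UTF-8 byte values rather than as text. They can be made readable by converting the numbers back to bytes and decoding; a minimal sketch using the first fifteen bytes of the ReplicaSet's kube-controller-manager entry:

```python
# The Raw field of a FieldsV1 managedFields entry is a JSON document.
# In the log dump it is printed as a list of decimal byte values;
# decoding those bytes as UTF-8 recovers the JSON text.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123]
decoded = bytes(raw).decode("utf-8")
print(decoded)  # → {"f:metadata":{
```

The same decoding applied to the full arrays yields the usual server-side-apply field ownership maps (`f:labels`, `f:ownerReferences`, `f:spec`, and so on).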
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:51:20.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1057" for this suite.

• [SLOW TEST:29.990 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":226,"skipped":3783,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:51:20.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8109 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8109;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8109 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8109;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8109.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8109.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8109.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8109.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8109.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8109.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8109.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8109.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8109.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8109.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.126.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.126.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.126.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.126.59_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8109 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8109;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8109 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8109;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8109.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8109.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8109.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8109.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8109.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8109.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8109.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8109.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8109.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8109.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8109.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.126.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.126.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.126.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.126.59_tcp@PTR;sleep 1; done
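The dig loops above probe each service name at increasing degrees of qualification (bare name, `name.namespace`, `name.namespace.svc`), relying on the pod's DNS search path to complete the shorter forms, and build the pod A record by dash-joining the pod IP. A minimal sketch of how those query names are constructed (the helper names here are illustrative, not the framework's):

```python
def service_query_names(service, namespace):
    # Partially qualified service names probed by the test, shortest first.
    # The bare name and the two partial forms resolve via the pod's
    # DNS search domains (<ns>.svc.cluster.local, svc.cluster.local, ...).
    return [service, f"{service}.{namespace}", f"{service}.{namespace}.svc"]

def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    # Mirrors the awk pipeline in the probe script:
    # 10.244.2.112 -> 10-244-2-112.<namespace>.pod.cluster.local
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(service_query_names("dns-test-service", "dns-8109"))
print(pod_a_record("10.244.2.112", "dns-8109"))
# → 10-244-2-112.dns-8109.pod.cluster.local
```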

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 13:51:30.492: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.495: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.498: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.500: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.503: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.505: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.508: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.510: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.526: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.528: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.531: INFO: Unable to read jessie_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.533: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.536: INFO: Unable to read jessie_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.539: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.541: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.543: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:30.559: INFO: Lookups using dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8109 wheezy_tcp@dns-test-service.dns-8109 wheezy_udp@dns-test-service.dns-8109.svc wheezy_tcp@dns-test-service.dns-8109.svc wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8109 jessie_tcp@dns-test-service.dns-8109 jessie_udp@dns-test-service.dns-8109.svc jessie_tcp@dns-test-service.dns-8109.svc jessie_udp@_http._tcp.dns-test-service.dns-8109.svc jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc]
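The repeating "Unable to read ... / Lookups ... failed" batches below are expected early in the test: the framework polls the probe pod's `/results` files every few seconds until every expected record resolves or the overall timeout elapses, so failures simply mean the records have not propagated yet. A minimal sketch of that poll-until pattern (timeouts and names here are illustrative, not the framework's actual values):

```python
import time

def poll_until(check, timeout=600.0, interval=5.0):
    # Retry `check` until it returns True or the timeout elapses.
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# A check that succeeds on the third attempt, mimicking results
# files that appear once the DNS records propagate.
attempts = {"n": 0}
def results_ready():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(poll_until(results_ready, timeout=5.0, interval=0.0))  # → True
```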

May 12 13:51:35.993: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:35.998: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.143: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.147: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.150: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.153: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.156: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.158: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.176: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.179: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.183: INFO: Unable to read jessie_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.185: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.188: INFO: Unable to read jessie_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.191: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.194: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.196: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:36.213: INFO: Lookups using dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8109 wheezy_tcp@dns-test-service.dns-8109 wheezy_udp@dns-test-service.dns-8109.svc wheezy_tcp@dns-test-service.dns-8109.svc wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8109 jessie_tcp@dns-test-service.dns-8109 jessie_udp@dns-test-service.dns-8109.svc jessie_tcp@dns-test-service.dns-8109.svc jessie_udp@_http._tcp.dns-test-service.dns-8109.svc jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc]

May 12 13:51:40.564: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.567: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.571: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.574: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.576: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.578: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.580: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.583: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.597: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.599: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.601: INFO: Unable to read jessie_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.603: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.605: INFO: Unable to read jessie_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.608: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.610: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:40.639: INFO: Lookups using dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8109 wheezy_tcp@dns-test-service.dns-8109 wheezy_udp@dns-test-service.dns-8109.svc wheezy_tcp@dns-test-service.dns-8109.svc wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8109 jessie_tcp@dns-test-service.dns-8109 jessie_udp@dns-test-service.dns-8109.svc jessie_tcp@dns-test-service.dns-8109.svc jessie_udp@_http._tcp.dns-test-service.dns-8109.svc jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc]

May 12 13:51:45.565: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.570: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.574: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.577: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.580: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.583: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.585: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.588: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.630: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.633: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.636: INFO: Unable to read jessie_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.638: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.640: INFO: Unable to read jessie_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.690: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.809: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:45.878: INFO: Lookups using dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8109 wheezy_tcp@dns-test-service.dns-8109 wheezy_udp@dns-test-service.dns-8109.svc wheezy_tcp@dns-test-service.dns-8109.svc wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8109 jessie_tcp@dns-test-service.dns-8109 jessie_udp@dns-test-service.dns-8109.svc jessie_tcp@dns-test-service.dns-8109.svc jessie_udp@_http._tcp.dns-test-service.dns-8109.svc jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc]

May 12 13:51:50.564: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.566: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.570: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.572: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.579: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.582: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.585: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.604: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.607: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.609: INFO: Unable to read jessie_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.612: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.616: INFO: Unable to read jessie_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.619: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.623: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.627: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:50.641: INFO: Lookups using dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8109 wheezy_tcp@dns-test-service.dns-8109 wheezy_udp@dns-test-service.dns-8109.svc wheezy_tcp@dns-test-service.dns-8109.svc wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8109 jessie_tcp@dns-test-service.dns-8109 jessie_udp@dns-test-service.dns-8109.svc jessie_tcp@dns-test-service.dns-8109.svc jessie_udp@_http._tcp.dns-test-service.dns-8109.svc jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc]

May 12 13:51:55.737: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.742: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.750: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.752: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.780: INFO: Unable to read wheezy_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.828: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.834: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.955: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.957: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.960: INFO: Unable to read jessie_udp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.962: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109 from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.964: INFO: Unable to read jessie_udp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.969: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.971: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc from pod dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b: the server could not find the requested resource (get pods dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b)
May 12 13:51:55.983: INFO: Lookups using dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8109 wheezy_tcp@dns-test-service.dns-8109 wheezy_udp@dns-test-service.dns-8109.svc wheezy_tcp@dns-test-service.dns-8109.svc wheezy_udp@_http._tcp.dns-test-service.dns-8109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8109 jessie_tcp@dns-test-service.dns-8109 jessie_udp@dns-test-service.dns-8109.svc jessie_tcp@dns-test-service.dns-8109.svc jessie_udp@_http._tcp.dns-test-service.dns-8109.svc jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc]

May 12 13:52:00.652: INFO: DNS probes using dns-8109/dns-test-9dc6feeb-c1f5-4d6d-8550-6beb33cb0b2b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:52:01.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8109" for this suite.

• [SLOW TEST:40.589 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":227,"skipped":3790,"failed":0}
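The retry blocks above probe the same 16-entry matrix of DNS names on every pass: two client images (wheezy, jessie) × four name forms (partial, with namespace, with `.svc`, SRV-style `_http._tcp.` prefix) × two protocols (udp, tcp). A minimal Go sketch of how that lookup list can be generated — an illustrative helper, not the actual e2e framework code:

```go
package main

import "fmt"

// lookupNames builds the probe list the DNS conformance test iterates over:
// for each client image, each service-name form, and each protocol, it
// emits "<image>_<proto>@<name>" (2 x 4 x 2 = 16 entries).
func lookupNames(service, namespace string) []string {
	images := []string{"wheezy", "jessie"}
	protos := []string{"udp", "tcp"}
	names := []string{
		service,                                            // partial qualified name
		service + "." + namespace,                          // with namespace
		service + "." + namespace + ".svc",                 // with svc domain
		"_http._tcp." + service + "." + namespace + ".svc", // SRV-style record
	}
	var out []string
	for _, img := range images {
		for _, n := range names {
			for _, p := range protos {
				out = append(out, fmt.Sprintf("%s_%s@%s", img, p, n))
			}
		}
	}
	return out
}

func main() {
	for _, n := range lookupNames("dns-test-service", "dns-8109") {
		fmt.Println(n)
	}
}
```

Running it reproduces the exact order seen in the "Lookups using dns-8109/… failed for" lines, from `wheezy_udp@dns-test-service` through `jessie_tcp@_http._tcp.dns-test-service.dns-8109.svc`.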
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:52:01.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:52:01.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243" in namespace "downward-api-4416" to be "Succeeded or Failed"
May 12 13:52:01.659: INFO: Pod "downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243": Phase="Pending", Reason="", readiness=false. Elapsed: 50.965302ms
May 12 13:52:04.083: INFO: Pod "downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475258338s
May 12 13:52:06.087: INFO: Pod "downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243": Phase="Pending", Reason="", readiness=false. Elapsed: 4.478826742s
May 12 13:52:08.126: INFO: Pod "downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243": Phase="Running", Reason="", readiness=true. Elapsed: 6.517871286s
May 12 13:52:10.210: INFO: Pod "downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.601852798s
STEP: Saw pod success
May 12 13:52:10.210: INFO: Pod "downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243" satisfied condition "Succeeded or Failed"
May 12 13:52:10.213: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243 container client-container: 
STEP: delete the pod
May 12 13:52:10.421: INFO: Waiting for pod downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243 to disappear
May 12 13:52:10.441: INFO: Pod downwardapi-volume-7a0db3be-e24e-4150-9aab-6cfe86988243 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:52:10.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4416" for this suite.

• [SLOW TEST:8.926 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3805,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:52:10.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 12 13:52:11.095: INFO: Waiting up to 5m0s for pod "downward-api-990bbef0-18f5-4fa9-9ec5-e967e1fafbb9" in namespace "downward-api-7001" to be "Succeeded or Failed"
May 12 13:52:11.131: INFO: Pod "downward-api-990bbef0-18f5-4fa9-9ec5-e967e1fafbb9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.614918ms
May 12 13:52:13.181: INFO: Pod "downward-api-990bbef0-18f5-4fa9-9ec5-e967e1fafbb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086300457s
May 12 13:52:15.184: INFO: Pod "downward-api-990bbef0-18f5-4fa9-9ec5-e967e1fafbb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089205199s
May 12 13:52:17.188: INFO: Pod "downward-api-990bbef0-18f5-4fa9-9ec5-e967e1fafbb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093255466s
STEP: Saw pod success
May 12 13:52:17.188: INFO: Pod "downward-api-990bbef0-18f5-4fa9-9ec5-e967e1fafbb9" satisfied condition "Succeeded or Failed"
May 12 13:52:17.191: INFO: Trying to get logs from node kali-worker2 pod downward-api-990bbef0-18f5-4fa9-9ec5-e967e1fafbb9 container dapi-container: 
STEP: delete the pod
May 12 13:52:17.231: INFO: Waiting for pod downward-api-990bbef0-18f5-4fa9-9ec5-e967e1fafbb9 to disappear
May 12 13:52:17.235: INFO: Pod downward-api-990bbef0-18f5-4fa9-9ec5-e967e1fafbb9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:52:17.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7001" for this suite.

• [SLOW TEST:6.849 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3884,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:52:17.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-a390e117-b024-4ea8-a48e-a1abd1eb78d0
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-a390e117-b024-4ea8-a48e-a1abd1eb78d0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:52:27.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7543" for this suite.

• [SLOW TEST:11.112 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3889,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:52:28.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-mdzq
STEP: Creating a pod to test atomic-volume-subpath
May 12 13:52:29.469: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mdzq" in namespace "subpath-6792" to be "Succeeded or Failed"
May 12 13:52:29.701: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Pending", Reason="", readiness=false. Elapsed: 231.82495ms
May 12 13:52:31.785: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315704132s
May 12 13:52:33.801: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 4.331292433s
May 12 13:52:35.804: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 6.334508732s
May 12 13:52:37.810: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 8.34074713s
May 12 13:52:39.821: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 10.351930506s
May 12 13:52:42.078: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 12.608872727s
May 12 13:52:44.081: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 14.612085166s
May 12 13:52:46.085: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 16.615867438s
May 12 13:52:48.089: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 18.619296557s
May 12 13:52:50.092: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 20.622863097s
May 12 13:52:52.096: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 22.626500626s
May 12 13:52:54.157: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Running", Reason="", readiness=true. Elapsed: 24.6875311s
May 12 13:52:56.161: INFO: Pod "pod-subpath-test-projected-mdzq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.691852286s
STEP: Saw pod success
May 12 13:52:56.161: INFO: Pod "pod-subpath-test-projected-mdzq" satisfied condition "Succeeded or Failed"
May 12 13:52:56.164: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-mdzq container test-container-subpath-projected-mdzq: 
STEP: delete the pod
May 12 13:52:56.290: INFO: Waiting for pod pod-subpath-test-projected-mdzq to disappear
May 12 13:52:56.332: INFO: Pod pod-subpath-test-projected-mdzq no longer exists
STEP: Deleting pod pod-subpath-test-projected-mdzq
May 12 13:52:56.332: INFO: Deleting pod "pod-subpath-test-projected-mdzq" in namespace "subpath-6792"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:52:56.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6792" for this suite.

• [SLOW TEST:27.929 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":231,"skipped":3893,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:52:56.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:52:56.949: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f" in namespace "projected-5857" to be "Succeeded or Failed"
May 12 13:52:57.432: INFO: Pod "downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f": Phase="Pending", Reason="", readiness=false. Elapsed: 483.124177ms
May 12 13:52:59.514: INFO: Pod "downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56568839s
May 12 13:53:01.701: INFO: Pod "downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.752624371s
May 12 13:53:03.853: INFO: Pod "downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f": Phase="Running", Reason="", readiness=true. Elapsed: 6.904443694s
May 12 13:53:05.857: INFO: Pod "downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.907708248s
STEP: Saw pod success
May 12 13:53:05.857: INFO: Pod "downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f" satisfied condition "Succeeded or Failed"
May 12 13:53:05.859: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f container client-container: 
STEP: delete the pod
May 12 13:53:05.887: INFO: Waiting for pod downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f to disappear
May 12 13:53:05.903: INFO: Pod downwardapi-volume-d42e1aa1-7351-43f2-940b-012aa34c210f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:53:05.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5857" for this suite.

• [SLOW TEST:9.571 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3895,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:53:05.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 12 13:53:18.155: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:53:18.177: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:53:20.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:53:20.290: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:53:22.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:53:22.181: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:53:24.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:53:24.180: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:53:26.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:53:26.182: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:53:28.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:53:28.182: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:53:30.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:53:30.182: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:53:32.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:53:32.180: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:53:34.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:53:34.182: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:53:34.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7803" for this suite.

• [SLOW TEST:28.278 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":3928,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:53:34.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-9ca0ca35-9a1a-4d72-b1b2-3bb9f1535700
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-9ca0ca35-9a1a-4d72-b1b2-3bb9f1535700
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:53:43.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5444" for this suite.

• [SLOW TEST:9.214 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3941,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:53:43.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:53:43.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-432'
May 12 13:53:44.079: INFO: stderr: ""
May 12 13:53:44.079: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May 12 13:53:44.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-432'
May 12 13:53:44.795: INFO: stderr: ""
May 12 13:53:44.795: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 12 13:53:45.798: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:53:45.798: INFO: Found 0 / 1
May 12 13:53:46.799: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:53:46.799: INFO: Found 0 / 1
May 12 13:53:47.799: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:53:47.799: INFO: Found 0 / 1
May 12 13:53:48.876: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:53:48.876: INFO: Found 1 / 1
May 12 13:53:48.876: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May 12 13:53:48.879: INFO: Selector matched 1 pods for map[app:agnhost]
May 12 13:53:48.879: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 12 13:53:48.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-jmfn2 --namespace=kubectl-432'
May 12 13:53:49.220: INFO: stderr: ""
May 12 13:53:49.220: INFO: stdout: "Name:         agnhost-master-jmfn2\nNamespace:    kubectl-432\nPriority:     0\nNode:         kali-worker/172.17.0.15\nStart Time:   Tue, 12 May 2020 13:53:44 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.2.119\nIPs:\n  IP:           10.244.2.119\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://36fed85899a7f614075638e73ab03742cbe829b1897cee036e29e6c4628a2d39\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 12 May 2020 13:53:47 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2pfsp (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-2pfsp:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-2pfsp\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  5s    default-scheduler     Successfully assigned kubectl-432/agnhost-master-jmfn2 to kali-worker\n  Normal  Pulled     4s    kubelet, kali-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    3s    kubelet, kali-worker  Created container agnhost-master\n  Normal  Started    2s    kubelet, kali-worker  Started container agnhost-master\n"
May 12 13:53:49.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-432'
May 12 13:53:49.371: INFO: stderr: ""
May 12 13:53:49.371: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-432\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-jmfn2\n"
May 12 13:53:49.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-432'
May 12 13:53:49.470: INFO: stderr: ""
May 12 13:53:49.470: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-432\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.100.194.31\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.119:6379\nSession Affinity:  None\nEvents:            <none>\n"
May 12 13:53:49.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
May 12 13:53:49.599: INFO: stderr: ""
May 12 13:53:49.599: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 29 Apr 2020 09:30:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Tue, 12 May 2020 13:53:41 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 12 May 2020 13:51:24 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 12 May 2020 13:51:24 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 12 May 2020 13:51:24 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 12 May 2020 13:51:24 +0000   Wed, 29 Apr 2020 09:31:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.19\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2146cf85bed648199604ab2e0e9ac609\n  System UUID:                e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.18.2\n  Kube-Proxy Version:         v1.18.2\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-rvq2k                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     13d\n  kube-system                 coredns-66bff467f8-w6zxd                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     13d\n  kube-system                 etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d\n  kube-system                 kindnet-65djz                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      13d\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         13d\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         13d\n  kube-system                 kube-proxy-pnhtq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         13d\n  local-path-storage          local-path-provisioner-bd4bb6b75-6l9ph        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              <none>\n"
May 12 13:53:49.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-432'
May 12 13:53:49.712: INFO: stderr: ""
May 12 13:53:49.713: INFO: stdout: "Name:         kubectl-432\nLabels:       e2e-framework=kubectl\n              e2e-run=252bd1e4-b42f-4955-b898-e36082558cf5\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:53:49.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-432" for this suite.

• [SLOW TEST:6.314 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":235,"skipped":3962,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:53:49.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 12 13:53:49.829: INFO: Waiting up to 5m0s for pod "pod-8c924bfd-422c-4900-b4ce-bb73d9b96b2c" in namespace "emptydir-3135" to be "Succeeded or Failed"
May 12 13:53:49.844: INFO: Pod "pod-8c924bfd-422c-4900-b4ce-bb73d9b96b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.400095ms
May 12 13:53:51.847: INFO: Pod "pod-8c924bfd-422c-4900-b4ce-bb73d9b96b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018166609s
May 12 13:53:53.894: INFO: Pod "pod-8c924bfd-422c-4900-b4ce-bb73d9b96b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064968754s
May 12 13:53:55.898: INFO: Pod "pod-8c924bfd-422c-4900-b4ce-bb73d9b96b2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069699742s
STEP: Saw pod success
May 12 13:53:55.898: INFO: Pod "pod-8c924bfd-422c-4900-b4ce-bb73d9b96b2c" satisfied condition "Succeeded or Failed"
May 12 13:53:55.901: INFO: Trying to get logs from node kali-worker2 pod pod-8c924bfd-422c-4900-b4ce-bb73d9b96b2c container test-container: <nil>
STEP: delete the pod
May 12 13:53:56.174: INFO: Waiting for pod pod-8c924bfd-422c-4900-b4ce-bb73d9b96b2c to disappear
May 12 13:53:56.220: INFO: Pod pod-8c924bfd-422c-4900-b4ce-bb73d9b96b2c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:53:56.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3135" for this suite.

• [SLOW TEST:6.509 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":3984,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:53:56.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 12 13:54:00.646: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:54:01.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1632" for this suite.

• [SLOW TEST:5.101 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4002,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:54:01.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:55:01.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-790" for this suite.

• [SLOW TEST:60.326 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4059,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:55:01.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 12 13:55:11.460: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:55:12.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3963" for this suite.

• [SLOW TEST:10.783 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4067,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:55:12.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 12 13:55:15.044: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 12 13:55:17.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888514, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:55:19.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888514, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:55:21.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888514, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:55:23.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888514, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:55:25.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888515, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888514, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 13:55:28.206: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:55:28.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:55:30.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7545" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:19.238 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":240,"skipped":4077,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:55:31.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 12 13:55:33.146: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-a c0baf80e-38ea-4f18-93fb-879b101b398e 3745539 0 2020-05-12 13:55:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-12 13:55:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 13:55:33.146: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-a c0baf80e-38ea-4f18-93fb-879b101b398e 3745539 0 2020-05-12 13:55:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-12 13:55:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 12 13:55:43.456: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-a c0baf80e-38ea-4f18-93fb-879b101b398e 3745576 0 2020-05-12 13:55:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-12 13:55:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 13:55:43.457: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-a c0baf80e-38ea-4f18-93fb-879b101b398e 3745576 0 2020-05-12 13:55:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-12 13:55:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 12 13:55:53.464: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-a c0baf80e-38ea-4f18-93fb-879b101b398e 3745606 0 2020-05-12 13:55:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-12 13:55:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 13:55:53.464: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-a c0baf80e-38ea-4f18-93fb-879b101b398e 3745606 0 2020-05-12 13:55:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-12 13:55:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 12 13:56:03.471: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-a c0baf80e-38ea-4f18-93fb-879b101b398e 3745634 0 2020-05-12 13:55:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-12 13:55:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 13:56:03.472: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-a c0baf80e-38ea-4f18-93fb-879b101b398e 3745634 0 2020-05-12 13:55:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-12 13:55:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 12 13:56:13.483: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-b 736ea852-cff4-42bd-9e89-f13ab7595db4 3745664 0 2020-05-12 13:56:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-12 13:56:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 13:56:13.483: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-b 736ea852-cff4-42bd-9e89-f13ab7595db4 3745664 0 2020-05-12 13:56:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-12 13:56:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 12 13:56:23.490: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-b 736ea852-cff4-42bd-9e89-f13ab7595db4 3745694 0 2020-05-12 13:56:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-12 13:56:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 13:56:23.490: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9918 /api/v1/namespaces/watch-9918/configmaps/e2e-watch-test-configmap-b 736ea852-cff4-42bd-9e89-f13ab7595db4 3745694 0 2020-05-12 13:56:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-12 13:56:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:56:33.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9918" for this suite.

• [SLOW TEST:61.822 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":241,"skipped":4110,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:56:33.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:56:33.543: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:56:34.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1742" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":242,"skipped":4111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:56:34.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
May 12 13:56:34.729: INFO: Waiting up to 5m0s for pod "pod-604ced4e-f765-41e8-abf7-62f07e7db877" in namespace "emptydir-8791" to be "Succeeded or Failed"
May 12 13:56:34.733: INFO: Pod "pod-604ced4e-f765-41e8-abf7-62f07e7db877": Phase="Pending", Reason="", readiness=false. Elapsed: 3.929607ms
May 12 13:56:36.736: INFO: Pod "pod-604ced4e-f765-41e8-abf7-62f07e7db877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007032044s
May 12 13:56:38.895: INFO: Pod "pod-604ced4e-f765-41e8-abf7-62f07e7db877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.166146631s
STEP: Saw pod success
May 12 13:56:38.896: INFO: Pod "pod-604ced4e-f765-41e8-abf7-62f07e7db877" satisfied condition "Succeeded or Failed"
May 12 13:56:38.945: INFO: Trying to get logs from node kali-worker pod pod-604ced4e-f765-41e8-abf7-62f07e7db877 container test-container: 
STEP: delete the pod
May 12 13:56:38.987: INFO: Waiting for pod pod-604ced4e-f765-41e8-abf7-62f07e7db877 to disappear
May 12 13:56:39.076: INFO: Pod pod-604ced4e-f765-41e8-abf7-62f07e7db877 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:56:39.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8791" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:56:39.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-0e6bdd1e-6e3f-476a-9031-151cfb4d6844
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:56:45.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7703" for this suite.

• [SLOW TEST:6.704 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4186,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:56:45.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-8f3ba485-99ed-4a29-92fe-c7ac82080b6b
STEP: Creating a pod to test consume secrets
May 12 13:56:46.427: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-541b1ba4-32d6-4305-acb3-6a737600d950" in namespace "projected-7405" to be "Succeeded or Failed"
May 12 13:56:46.608: INFO: Pod "pod-projected-secrets-541b1ba4-32d6-4305-acb3-6a737600d950": Phase="Pending", Reason="", readiness=false. Elapsed: 181.044385ms
May 12 13:56:48.801: INFO: Pod "pod-projected-secrets-541b1ba4-32d6-4305-acb3-6a737600d950": Phase="Pending", Reason="", readiness=false. Elapsed: 2.373756544s
May 12 13:56:50.804: INFO: Pod "pod-projected-secrets-541b1ba4-32d6-4305-acb3-6a737600d950": Phase="Running", Reason="", readiness=true. Elapsed: 4.377598553s
May 12 13:56:52.808: INFO: Pod "pod-projected-secrets-541b1ba4-32d6-4305-acb3-6a737600d950": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.381627229s
STEP: Saw pod success
May 12 13:56:52.808: INFO: Pod "pod-projected-secrets-541b1ba4-32d6-4305-acb3-6a737600d950" satisfied condition "Succeeded or Failed"
May 12 13:56:52.810: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-541b1ba4-32d6-4305-acb3-6a737600d950 container projected-secret-volume-test: 
STEP: delete the pod
May 12 13:56:52.938: INFO: Waiting for pod pod-projected-secrets-541b1ba4-32d6-4305-acb3-6a737600d950 to disappear
May 12 13:56:52.968: INFO: Pod pod-projected-secrets-541b1ba4-32d6-4305-acb3-6a737600d950 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:56:52.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7405" for this suite.

• [SLOW TEST:7.158 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4192,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:56:52.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:56:53.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:56:57.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4536" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4221,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:56:57.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:57:08.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7915" for this suite.

• [SLOW TEST:11.304 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":247,"skipped":4270,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:57:08.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:57:08.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd5517d0-2cf7-40f0-a130-47e89f722070" in namespace "downward-api-7737" to be "Succeeded or Failed"
May 12 13:57:08.851: INFO: Pod "downwardapi-volume-fd5517d0-2cf7-40f0-a130-47e89f722070": Phase="Pending", Reason="", readiness=false. Elapsed: 48.492282ms
May 12 13:57:11.232: INFO: Pod "downwardapi-volume-fd5517d0-2cf7-40f0-a130-47e89f722070": Phase="Pending", Reason="", readiness=false. Elapsed: 2.429895476s
May 12 13:57:13.235: INFO: Pod "downwardapi-volume-fd5517d0-2cf7-40f0-a130-47e89f722070": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432816732s
May 12 13:57:15.238: INFO: Pod "downwardapi-volume-fd5517d0-2cf7-40f0-a130-47e89f722070": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.435986234s
STEP: Saw pod success
May 12 13:57:15.238: INFO: Pod "downwardapi-volume-fd5517d0-2cf7-40f0-a130-47e89f722070" satisfied condition "Succeeded or Failed"
May 12 13:57:15.240: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-fd5517d0-2cf7-40f0-a130-47e89f722070 container client-container: 
STEP: delete the pod
May 12 13:57:15.278: INFO: Waiting for pod downwardapi-volume-fd5517d0-2cf7-40f0-a130-47e89f722070 to disappear
May 12 13:57:15.292: INFO: Pod downwardapi-volume-fd5517d0-2cf7-40f0-a130-47e89f722070 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:57:15.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7737" for this suite.

• [SLOW TEST:6.611 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4271,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:57:15.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:57:15.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-253890da-0668-4365-b76c-5daa196f556f" in namespace "projected-8952" to be "Succeeded or Failed"
May 12 13:57:15.448: INFO: Pod "downwardapi-volume-253890da-0668-4365-b76c-5daa196f556f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.62209ms
May 12 13:57:17.494: INFO: Pod "downwardapi-volume-253890da-0668-4365-b76c-5daa196f556f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065288452s
May 12 13:57:19.498: INFO: Pod "downwardapi-volume-253890da-0668-4365-b76c-5daa196f556f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06895826s
STEP: Saw pod success
May 12 13:57:19.498: INFO: Pod "downwardapi-volume-253890da-0668-4365-b76c-5daa196f556f" satisfied condition "Succeeded or Failed"
May 12 13:57:19.502: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-253890da-0668-4365-b76c-5daa196f556f container client-container: 
STEP: delete the pod
May 12 13:57:19.670: INFO: Waiting for pod downwardapi-volume-253890da-0668-4365-b76c-5daa196f556f to disappear
May 12 13:57:19.865: INFO: Pod downwardapi-volume-253890da-0668-4365-b76c-5daa196f556f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:57:19.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8952" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4275,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:57:19.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 13:57:20.210: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad51a469-d446-43ef-923d-4e40158df1b5" in namespace "downward-api-224" to be "Succeeded or Failed"
May 12 13:57:20.230: INFO: Pod "downwardapi-volume-ad51a469-d446-43ef-923d-4e40158df1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.487998ms
May 12 13:57:22.234: INFO: Pod "downwardapi-volume-ad51a469-d446-43ef-923d-4e40158df1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024636754s
May 12 13:57:24.237: INFO: Pod "downwardapi-volume-ad51a469-d446-43ef-923d-4e40158df1b5": Phase="Running", Reason="", readiness=true. Elapsed: 4.027443433s
May 12 13:57:26.241: INFO: Pod "downwardapi-volume-ad51a469-d446-43ef-923d-4e40158df1b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03169407s
STEP: Saw pod success
May 12 13:57:26.241: INFO: Pod "downwardapi-volume-ad51a469-d446-43ef-923d-4e40158df1b5" satisfied condition "Succeeded or Failed"
May 12 13:57:26.244: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-ad51a469-d446-43ef-923d-4e40158df1b5 container client-container: 
STEP: delete the pod
May 12 13:57:26.286: INFO: Waiting for pod downwardapi-volume-ad51a469-d446-43ef-923d-4e40158df1b5 to disappear
May 12 13:57:26.314: INFO: Pod downwardapi-volume-ad51a469-d446-43ef-923d-4e40158df1b5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:57:26.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-224" for this suite.

• [SLOW TEST:6.448 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4301,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:57:26.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
May 12 13:57:26.473: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix281021838/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:57:26.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-632" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":251,"skipped":4325,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:57:26.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-41db4e6d-8375-4b15-993a-0e9ad98ebd36
STEP: Creating a pod to test consume configMaps
May 12 13:57:26.689: INFO: Waiting up to 5m0s for pod "pod-configmaps-e50d0653-36a7-47d3-b8a4-cb41e292e2da" in namespace "configmap-8282" to be "Succeeded or Failed"
May 12 13:57:26.693: INFO: Pod "pod-configmaps-e50d0653-36a7-47d3-b8a4-cb41e292e2da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010179ms
May 12 13:57:28.698: INFO: Pod "pod-configmaps-e50d0653-36a7-47d3-b8a4-cb41e292e2da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008646483s
May 12 13:57:30.701: INFO: Pod "pod-configmaps-e50d0653-36a7-47d3-b8a4-cb41e292e2da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011809379s
May 12 13:57:32.918: INFO: Pod "pod-configmaps-e50d0653-36a7-47d3-b8a4-cb41e292e2da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.228933478s
STEP: Saw pod success
May 12 13:57:32.918: INFO: Pod "pod-configmaps-e50d0653-36a7-47d3-b8a4-cb41e292e2da" satisfied condition "Succeeded or Failed"
May 12 13:57:32.920: INFO: Trying to get logs from node kali-worker pod pod-configmaps-e50d0653-36a7-47d3-b8a4-cb41e292e2da container configmap-volume-test: 
STEP: delete the pod
May 12 13:57:33.131: INFO: Waiting for pod pod-configmaps-e50d0653-36a7-47d3-b8a4-cb41e292e2da to disappear
May 12 13:57:33.344: INFO: Pod pod-configmaps-e50d0653-36a7-47d3-b8a4-cb41e292e2da no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:57:33.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8282" for this suite.

• [SLOW TEST:6.852 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4328,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:57:33.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-a8a894cb-6c3f-4bd2-9694-146cc97837e4
STEP: Creating a pod to test consume secrets
May 12 13:57:34.163: INFO: Waiting up to 5m0s for pod "pod-secrets-df70e012-92dd-4be8-89d1-52665ca1d5c1" in namespace "secrets-7012" to be "Succeeded or Failed"
May 12 13:57:34.499: INFO: Pod "pod-secrets-df70e012-92dd-4be8-89d1-52665ca1d5c1": Phase="Pending", Reason="", readiness=false. Elapsed: 335.579283ms
May 12 13:57:36.651: INFO: Pod "pod-secrets-df70e012-92dd-4be8-89d1-52665ca1d5c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.487820395s
May 12 13:57:38.676: INFO: Pod "pod-secrets-df70e012-92dd-4be8-89d1-52665ca1d5c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.513509915s
May 12 13:57:40.679: INFO: Pod "pod-secrets-df70e012-92dd-4be8-89d1-52665ca1d5c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.515821328s
STEP: Saw pod success
May 12 13:57:40.679: INFO: Pod "pod-secrets-df70e012-92dd-4be8-89d1-52665ca1d5c1" satisfied condition "Succeeded or Failed"
May 12 13:57:40.681: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-df70e012-92dd-4be8-89d1-52665ca1d5c1 container secret-volume-test: 
STEP: delete the pod
May 12 13:57:40.827: INFO: Waiting for pod pod-secrets-df70e012-92dd-4be8-89d1-52665ca1d5c1 to disappear
May 12 13:57:40.867: INFO: Pod pod-secrets-df70e012-92dd-4be8-89d1-52665ca1d5c1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:57:40.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7012" for this suite.

• [SLOW TEST:7.466 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4338,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:57:40.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0512 13:57:51.252080       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 13:57:51.252: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:57:51.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6119" for this suite.

• [SLOW TEST:10.417 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":254,"skipped":4382,"failed":0}
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:57:51.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 12 13:58:13.925: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:13.925: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:14.268560       7 log.go:172] (0xc002a38000) (0xc002a70000) Create stream
I0512 13:58:14.268590       7 log.go:172] (0xc002a38000) (0xc002a70000) Stream added, broadcasting: 1
I0512 13:58:14.270521       7 log.go:172] (0xc002a38000) Reply frame received for 1
I0512 13:58:14.270563       7 log.go:172] (0xc002a38000) (0xc002a701e0) Create stream
I0512 13:58:14.270573       7 log.go:172] (0xc002a38000) (0xc002a701e0) Stream added, broadcasting: 3
I0512 13:58:14.271256       7 log.go:172] (0xc002a38000) Reply frame received for 3
I0512 13:58:14.271282       7 log.go:172] (0xc002a38000) (0xc002a70320) Create stream
I0512 13:58:14.271290       7 log.go:172] (0xc002a38000) (0xc002a70320) Stream added, broadcasting: 5
I0512 13:58:14.271931       7 log.go:172] (0xc002a38000) Reply frame received for 5
I0512 13:58:14.353454       7 log.go:172] (0xc002a38000) Data frame received for 5
I0512 13:58:14.353489       7 log.go:172] (0xc002a38000) Data frame received for 3
I0512 13:58:14.353527       7 log.go:172] (0xc002a701e0) (3) Data frame handling
I0512 13:58:14.353544       7 log.go:172] (0xc002a701e0) (3) Data frame sent
I0512 13:58:14.353556       7 log.go:172] (0xc002a38000) Data frame received for 3
I0512 13:58:14.353569       7 log.go:172] (0xc002a701e0) (3) Data frame handling
I0512 13:58:14.353590       7 log.go:172] (0xc002a70320) (5) Data frame handling
I0512 13:58:14.354774       7 log.go:172] (0xc002a38000) Data frame received for 1
I0512 13:58:14.354814       7 log.go:172] (0xc002a70000) (1) Data frame handling
I0512 13:58:14.354836       7 log.go:172] (0xc002a70000) (1) Data frame sent
I0512 13:58:14.354926       7 log.go:172] (0xc002a38000) (0xc002a70000) Stream removed, broadcasting: 1
I0512 13:58:14.354950       7 log.go:172] (0xc002a38000) Go away received
I0512 13:58:14.354982       7 log.go:172] (0xc002a38000) (0xc002a70000) Stream removed, broadcasting: 1
I0512 13:58:14.354997       7 log.go:172] (0xc002a38000) (0xc002a701e0) Stream removed, broadcasting: 3
I0512 13:58:14.355007       7 log.go:172] (0xc002a38000) (0xc002a70320) Stream removed, broadcasting: 5
May 12 13:58:14.355: INFO: Exec stderr: ""
May 12 13:58:14.355: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:14.355: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:14.404594       7 log.go:172] (0xc0027d5600) (0xc000b735e0) Create stream
I0512 13:58:14.404636       7 log.go:172] (0xc0027d5600) (0xc000b735e0) Stream added, broadcasting: 1
I0512 13:58:14.410876       7 log.go:172] (0xc0027d5600) Reply frame received for 1
I0512 13:58:14.410939       7 log.go:172] (0xc0027d5600) (0xc002ab2000) Create stream
I0512 13:58:14.410974       7 log.go:172] (0xc0027d5600) (0xc002ab2000) Stream added, broadcasting: 3
I0512 13:58:14.417839       7 log.go:172] (0xc0027d5600) Reply frame received for 3
I0512 13:58:14.417869       7 log.go:172] (0xc0027d5600) (0xc002a703c0) Create stream
I0512 13:58:14.417884       7 log.go:172] (0xc0027d5600) (0xc002a703c0) Stream added, broadcasting: 5
I0512 13:58:14.418392       7 log.go:172] (0xc0027d5600) Reply frame received for 5
I0512 13:58:14.491017       7 log.go:172] (0xc0027d5600) Data frame received for 5
I0512 13:58:14.491068       7 log.go:172] (0xc002a703c0) (5) Data frame handling
I0512 13:58:14.491092       7 log.go:172] (0xc0027d5600) Data frame received for 3
I0512 13:58:14.491103       7 log.go:172] (0xc002ab2000) (3) Data frame handling
I0512 13:58:14.491120       7 log.go:172] (0xc002ab2000) (3) Data frame sent
I0512 13:58:14.491138       7 log.go:172] (0xc0027d5600) Data frame received for 3
I0512 13:58:14.491155       7 log.go:172] (0xc002ab2000) (3) Data frame handling
I0512 13:58:14.492727       7 log.go:172] (0xc0027d5600) Data frame received for 1
I0512 13:58:14.492750       7 log.go:172] (0xc000b735e0) (1) Data frame handling
I0512 13:58:14.492763       7 log.go:172] (0xc000b735e0) (1) Data frame sent
I0512 13:58:14.492784       7 log.go:172] (0xc0027d5600) (0xc000b735e0) Stream removed, broadcasting: 1
I0512 13:58:14.492803       7 log.go:172] (0xc0027d5600) Go away received
I0512 13:58:14.492961       7 log.go:172] (0xc0027d5600) (0xc000b735e0) Stream removed, broadcasting: 1
I0512 13:58:14.492989       7 log.go:172] (0xc0027d5600) (0xc002ab2000) Stream removed, broadcasting: 3
I0512 13:58:14.493023       7 log.go:172] (0xc0027d5600) (0xc002a703c0) Stream removed, broadcasting: 5
May 12 13:58:14.493: INFO: Exec stderr: ""
May 12 13:58:14.493: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:14.493: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:14.522724       7 log.go:172] (0xc002b704d0) (0xc002a70780) Create stream
I0512 13:58:14.522758       7 log.go:172] (0xc002b704d0) (0xc002a70780) Stream added, broadcasting: 1
I0512 13:58:14.524205       7 log.go:172] (0xc002b704d0) Reply frame received for 1
I0512 13:58:14.524231       7 log.go:172] (0xc002b704d0) (0xc002a70820) Create stream
I0512 13:58:14.524240       7 log.go:172] (0xc002b704d0) (0xc002a70820) Stream added, broadcasting: 3
I0512 13:58:14.525024       7 log.go:172] (0xc002b704d0) Reply frame received for 3
I0512 13:58:14.525052       7 log.go:172] (0xc002b704d0) (0xc000b737c0) Create stream
I0512 13:58:14.525063       7 log.go:172] (0xc002b704d0) (0xc000b737c0) Stream added, broadcasting: 5
I0512 13:58:14.526033       7 log.go:172] (0xc002b704d0) Reply frame received for 5
I0512 13:58:14.599257       7 log.go:172] (0xc002b704d0) Data frame received for 5
I0512 13:58:14.599301       7 log.go:172] (0xc000b737c0) (5) Data frame handling
I0512 13:58:14.599346       7 log.go:172] (0xc002b704d0) Data frame received for 3
I0512 13:58:14.599363       7 log.go:172] (0xc002a70820) (3) Data frame handling
I0512 13:58:14.599383       7 log.go:172] (0xc002a70820) (3) Data frame sent
I0512 13:58:14.599398       7 log.go:172] (0xc002b704d0) Data frame received for 3
I0512 13:58:14.599411       7 log.go:172] (0xc002a70820) (3) Data frame handling
I0512 13:58:14.600618       7 log.go:172] (0xc002b704d0) Data frame received for 1
I0512 13:58:14.600649       7 log.go:172] (0xc002a70780) (1) Data frame handling
I0512 13:58:14.600668       7 log.go:172] (0xc002a70780) (1) Data frame sent
I0512 13:58:14.600683       7 log.go:172] (0xc002b704d0) (0xc002a70780) Stream removed, broadcasting: 1
I0512 13:58:14.600703       7 log.go:172] (0xc002b704d0) Go away received
I0512 13:58:14.600820       7 log.go:172] (0xc002b704d0) (0xc002a70780) Stream removed, broadcasting: 1
I0512 13:58:14.600836       7 log.go:172] (0xc002b704d0) (0xc002a70820) Stream removed, broadcasting: 3
I0512 13:58:14.600845       7 log.go:172] (0xc002b704d0) (0xc000b737c0) Stream removed, broadcasting: 5
May 12 13:58:14.600: INFO: Exec stderr: ""
May 12 13:58:14.600: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:14.600: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:14.628100       7 log.go:172] (0xc002fd0420) (0xc00036bae0) Create stream
I0512 13:58:14.628146       7 log.go:172] (0xc002fd0420) (0xc00036bae0) Stream added, broadcasting: 1
I0512 13:58:14.630492       7 log.go:172] (0xc002fd0420) Reply frame received for 1
I0512 13:58:14.630533       7 log.go:172] (0xc002fd0420) (0xc00036bd60) Create stream
I0512 13:58:14.630551       7 log.go:172] (0xc002fd0420) (0xc00036bd60) Stream added, broadcasting: 3
I0512 13:58:14.631478       7 log.go:172] (0xc002fd0420) Reply frame received for 3
I0512 13:58:14.631516       7 log.go:172] (0xc002fd0420) (0xc000b73860) Create stream
I0512 13:58:14.631529       7 log.go:172] (0xc002fd0420) (0xc000b73860) Stream added, broadcasting: 5
I0512 13:58:14.632424       7 log.go:172] (0xc002fd0420) Reply frame received for 5
I0512 13:58:14.680887       7 log.go:172] (0xc002fd0420) Data frame received for 5
I0512 13:58:14.680935       7 log.go:172] (0xc000b73860) (5) Data frame handling
I0512 13:58:14.680969       7 log.go:172] (0xc002fd0420) Data frame received for 3
I0512 13:58:14.680991       7 log.go:172] (0xc00036bd60) (3) Data frame handling
I0512 13:58:14.681016       7 log.go:172] (0xc00036bd60) (3) Data frame sent
I0512 13:58:14.681032       7 log.go:172] (0xc002fd0420) Data frame received for 3
I0512 13:58:14.681047       7 log.go:172] (0xc00036bd60) (3) Data frame handling
I0512 13:58:14.682495       7 log.go:172] (0xc002fd0420) Data frame received for 1
I0512 13:58:14.682515       7 log.go:172] (0xc00036bae0) (1) Data frame handling
I0512 13:58:14.682531       7 log.go:172] (0xc00036bae0) (1) Data frame sent
I0512 13:58:14.682548       7 log.go:172] (0xc002fd0420) (0xc00036bae0) Stream removed, broadcasting: 1
I0512 13:58:14.682569       7 log.go:172] (0xc002fd0420) Go away received
I0512 13:58:14.682686       7 log.go:172] (0xc002fd0420) (0xc00036bae0) Stream removed, broadcasting: 1
I0512 13:58:14.682723       7 log.go:172] (0xc002fd0420) (0xc00036bd60) Stream removed, broadcasting: 3
I0512 13:58:14.682739       7 log.go:172] (0xc002fd0420) (0xc000b73860) Stream removed, broadcasting: 5
May 12 13:58:14.682: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 12 13:58:14.682: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:14.682: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:14.709372       7 log.go:172] (0xc002b70c60) (0xc002a70a00) Create stream
I0512 13:58:14.709405       7 log.go:172] (0xc002b70c60) (0xc002a70a00) Stream added, broadcasting: 1
I0512 13:58:14.711313       7 log.go:172] (0xc002b70c60) Reply frame received for 1
I0512 13:58:14.711351       7 log.go:172] (0xc002b70c60) (0xc00036bea0) Create stream
I0512 13:58:14.711365       7 log.go:172] (0xc002b70c60) (0xc00036bea0) Stream added, broadcasting: 3
I0512 13:58:14.712108       7 log.go:172] (0xc002b70c60) Reply frame received for 3
I0512 13:58:14.712139       7 log.go:172] (0xc002b70c60) (0xc000187400) Create stream
I0512 13:58:14.712149       7 log.go:172] (0xc002b70c60) (0xc000187400) Stream added, broadcasting: 5
I0512 13:58:14.713676       7 log.go:172] (0xc002b70c60) Reply frame received for 5
I0512 13:58:14.757745       7 log.go:172] (0xc002b70c60) Data frame received for 3
I0512 13:58:14.757783       7 log.go:172] (0xc00036bea0) (3) Data frame handling
I0512 13:58:14.757797       7 log.go:172] (0xc00036bea0) (3) Data frame sent
I0512 13:58:14.757807       7 log.go:172] (0xc002b70c60) Data frame received for 3
I0512 13:58:14.757816       7 log.go:172] (0xc00036bea0) (3) Data frame handling
I0512 13:58:14.757863       7 log.go:172] (0xc002b70c60) Data frame received for 5
I0512 13:58:14.757884       7 log.go:172] (0xc000187400) (5) Data frame handling
I0512 13:58:14.758770       7 log.go:172] (0xc002b70c60) Data frame received for 1
I0512 13:58:14.758793       7 log.go:172] (0xc002a70a00) (1) Data frame handling
I0512 13:58:14.758810       7 log.go:172] (0xc002a70a00) (1) Data frame sent
I0512 13:58:14.758822       7 log.go:172] (0xc002b70c60) (0xc002a70a00) Stream removed, broadcasting: 1
I0512 13:58:14.758841       7 log.go:172] (0xc002b70c60) Go away received
I0512 13:58:14.758976       7 log.go:172] (0xc002b70c60) (0xc002a70a00) Stream removed, broadcasting: 1
I0512 13:58:14.758995       7 log.go:172] (0xc002b70c60) (0xc00036bea0) Stream removed, broadcasting: 3
I0512 13:58:14.759005       7 log.go:172] (0xc002b70c60) (0xc000187400) Stream removed, broadcasting: 5
May 12 13:58:14.759: INFO: Exec stderr: ""
May 12 13:58:14.759: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:14.759: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:14.781908       7 log.go:172] (0xc002b713f0) (0xc002a70be0) Create stream
I0512 13:58:14.781949       7 log.go:172] (0xc002b713f0) (0xc002a70be0) Stream added, broadcasting: 1
I0512 13:58:14.784206       7 log.go:172] (0xc002b713f0) Reply frame received for 1
I0512 13:58:14.784251       7 log.go:172] (0xc002b713f0) (0xc002a70c80) Create stream
I0512 13:58:14.784270       7 log.go:172] (0xc002b713f0) (0xc002a70c80) Stream added, broadcasting: 3
I0512 13:58:14.784907       7 log.go:172] (0xc002b713f0) Reply frame received for 3
I0512 13:58:14.784936       7 log.go:172] (0xc002b713f0) (0xc002a70d20) Create stream
I0512 13:58:14.784949       7 log.go:172] (0xc002b713f0) (0xc002a70d20) Stream added, broadcasting: 5
I0512 13:58:14.785751       7 log.go:172] (0xc002b713f0) Reply frame received for 5
I0512 13:58:14.864300       7 log.go:172] (0xc002b713f0) Data frame received for 5
I0512 13:58:14.864337       7 log.go:172] (0xc002a70d20) (5) Data frame handling
I0512 13:58:14.864357       7 log.go:172] (0xc002b713f0) Data frame received for 3
I0512 13:58:14.864366       7 log.go:172] (0xc002a70c80) (3) Data frame handling
I0512 13:58:14.864376       7 log.go:172] (0xc002a70c80) (3) Data frame sent
I0512 13:58:14.864385       7 log.go:172] (0xc002b713f0) Data frame received for 3
I0512 13:58:14.864397       7 log.go:172] (0xc002a70c80) (3) Data frame handling
I0512 13:58:14.865567       7 log.go:172] (0xc002b713f0) Data frame received for 1
I0512 13:58:14.865583       7 log.go:172] (0xc002a70be0) (1) Data frame handling
I0512 13:58:14.865592       7 log.go:172] (0xc002a70be0) (1) Data frame sent
I0512 13:58:14.865609       7 log.go:172] (0xc002b713f0) (0xc002a70be0) Stream removed, broadcasting: 1
I0512 13:58:14.865630       7 log.go:172] (0xc002b713f0) Go away received
I0512 13:58:14.865742       7 log.go:172] (0xc002b713f0) (0xc002a70be0) Stream removed, broadcasting: 1
I0512 13:58:14.865772       7 log.go:172] (0xc002b713f0) (0xc002a70c80) Stream removed, broadcasting: 3
I0512 13:58:14.865792       7 log.go:172] (0xc002b713f0) (0xc002a70d20) Stream removed, broadcasting: 5
May 12 13:58:14.865: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 12 13:58:14.865: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:14.865: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:15.303927       7 log.go:172] (0xc0027d5ef0) (0xc000b73c20) Create stream
I0512 13:58:15.303967       7 log.go:172] (0xc0027d5ef0) (0xc000b73c20) Stream added, broadcasting: 1
I0512 13:58:15.306024       7 log.go:172] (0xc0027d5ef0) Reply frame received for 1
I0512 13:58:15.306070       7 log.go:172] (0xc0027d5ef0) (0xc002ab20a0) Create stream
I0512 13:58:15.306089       7 log.go:172] (0xc0027d5ef0) (0xc002ab20a0) Stream added, broadcasting: 3
I0512 13:58:15.307091       7 log.go:172] (0xc0027d5ef0) Reply frame received for 3
I0512 13:58:15.307136       7 log.go:172] (0xc0027d5ef0) (0xc000187720) Create stream
I0512 13:58:15.307149       7 log.go:172] (0xc0027d5ef0) (0xc000187720) Stream added, broadcasting: 5
I0512 13:58:15.307998       7 log.go:172] (0xc0027d5ef0) Reply frame received for 5
I0512 13:58:15.364599       7 log.go:172] (0xc0027d5ef0) Data frame received for 5
I0512 13:58:15.364637       7 log.go:172] (0xc000187720) (5) Data frame handling
I0512 13:58:15.364661       7 log.go:172] (0xc0027d5ef0) Data frame received for 3
I0512 13:58:15.364679       7 log.go:172] (0xc002ab20a0) (3) Data frame handling
I0512 13:58:15.364693       7 log.go:172] (0xc002ab20a0) (3) Data frame sent
I0512 13:58:15.364704       7 log.go:172] (0xc0027d5ef0) Data frame received for 3
I0512 13:58:15.364714       7 log.go:172] (0xc002ab20a0) (3) Data frame handling
I0512 13:58:15.366174       7 log.go:172] (0xc0027d5ef0) Data frame received for 1
I0512 13:58:15.366200       7 log.go:172] (0xc000b73c20) (1) Data frame handling
I0512 13:58:15.366226       7 log.go:172] (0xc000b73c20) (1) Data frame sent
I0512 13:58:15.366258       7 log.go:172] (0xc0027d5ef0) (0xc000b73c20) Stream removed, broadcasting: 1
I0512 13:58:15.366365       7 log.go:172] (0xc0027d5ef0) (0xc000b73c20) Stream removed, broadcasting: 1
I0512 13:58:15.366385       7 log.go:172] (0xc0027d5ef0) (0xc002ab20a0) Stream removed, broadcasting: 3
I0512 13:58:15.366506       7 log.go:172] (0xc0027d5ef0) (0xc000187720) Stream removed, broadcasting: 5
I0512 13:58:15.366633       7 log.go:172] (0xc0027d5ef0) Go away received
May 12 13:58:15.366: INFO: Exec stderr: ""
May 12 13:58:15.366: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:15.366: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:15.394073       7 log.go:172] (0xc002dd4580) (0xc000252640) Create stream
I0512 13:58:15.394098       7 log.go:172] (0xc002dd4580) (0xc000252640) Stream added, broadcasting: 1
I0512 13:58:15.395513       7 log.go:172] (0xc002dd4580) Reply frame received for 1
I0512 13:58:15.395546       7 log.go:172] (0xc002dd4580) (0xc0002528c0) Create stream
I0512 13:58:15.395558       7 log.go:172] (0xc002dd4580) (0xc0002528c0) Stream added, broadcasting: 3
I0512 13:58:15.396137       7 log.go:172] (0xc002dd4580) Reply frame received for 3
I0512 13:58:15.396164       7 log.go:172] (0xc002dd4580) (0xc000252be0) Create stream
I0512 13:58:15.396172       7 log.go:172] (0xc002dd4580) (0xc000252be0) Stream added, broadcasting: 5
I0512 13:58:15.396755       7 log.go:172] (0xc002dd4580) Reply frame received for 5
I0512 13:58:15.444414       7 log.go:172] (0xc002dd4580) Data frame received for 5
I0512 13:58:15.444460       7 log.go:172] (0xc000252be0) (5) Data frame handling
I0512 13:58:15.444488       7 log.go:172] (0xc002dd4580) Data frame received for 3
I0512 13:58:15.444507       7 log.go:172] (0xc0002528c0) (3) Data frame handling
I0512 13:58:15.444542       7 log.go:172] (0xc0002528c0) (3) Data frame sent
I0512 13:58:15.444562       7 log.go:172] (0xc002dd4580) Data frame received for 3
I0512 13:58:15.444577       7 log.go:172] (0xc0002528c0) (3) Data frame handling
I0512 13:58:15.445711       7 log.go:172] (0xc002dd4580) Data frame received for 1
I0512 13:58:15.445732       7 log.go:172] (0xc000252640) (1) Data frame handling
I0512 13:58:15.445749       7 log.go:172] (0xc000252640) (1) Data frame sent
I0512 13:58:15.445865       7 log.go:172] (0xc002dd4580) (0xc000252640) Stream removed, broadcasting: 1
I0512 13:58:15.445945       7 log.go:172] (0xc002dd4580) Go away received
I0512 13:58:15.445994       7 log.go:172] (0xc002dd4580) (0xc000252640) Stream removed, broadcasting: 1
I0512 13:58:15.446026       7 log.go:172] (0xc002dd4580) (0xc0002528c0) Stream removed, broadcasting: 3
I0512 13:58:15.446059       7 log.go:172] (0xc002dd4580) (0xc000252be0) Stream removed, broadcasting: 5
May 12 13:58:15.446: INFO: Exec stderr: ""
May 12 13:58:15.446: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:15.446: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:15.523295       7 log.go:172] (0xc002e584d0) (0xc001a82f00) Create stream
I0512 13:58:15.523319       7 log.go:172] (0xc002e584d0) (0xc001a82f00) Stream added, broadcasting: 1
I0512 13:58:15.524720       7 log.go:172] (0xc002e584d0) Reply frame received for 1
I0512 13:58:15.524750       7 log.go:172] (0xc002e584d0) (0xc002a70dc0) Create stream
I0512 13:58:15.524763       7 log.go:172] (0xc002e584d0) (0xc002a70dc0) Stream added, broadcasting: 3
I0512 13:58:15.525540       7 log.go:172] (0xc002e584d0) Reply frame received for 3
I0512 13:58:15.525580       7 log.go:172] (0xc002e584d0) (0xc0001879a0) Create stream
I0512 13:58:15.525611       7 log.go:172] (0xc002e584d0) (0xc0001879a0) Stream added, broadcasting: 5
I0512 13:58:15.526291       7 log.go:172] (0xc002e584d0) Reply frame received for 5
I0512 13:58:15.581410       7 log.go:172] (0xc002e584d0) Data frame received for 3
I0512 13:58:15.581466       7 log.go:172] (0xc002a70dc0) (3) Data frame handling
I0512 13:58:15.581494       7 log.go:172] (0xc002a70dc0) (3) Data frame sent
I0512 13:58:15.581521       7 log.go:172] (0xc002e584d0) Data frame received for 3
I0512 13:58:15.581548       7 log.go:172] (0xc002e584d0) Data frame received for 5
I0512 13:58:15.581591       7 log.go:172] (0xc0001879a0) (5) Data frame handling
I0512 13:58:15.581614       7 log.go:172] (0xc002a70dc0) (3) Data frame handling
I0512 13:58:15.582899       7 log.go:172] (0xc002e584d0) Data frame received for 1
I0512 13:58:15.582919       7 log.go:172] (0xc001a82f00) (1) Data frame handling
I0512 13:58:15.582946       7 log.go:172] (0xc001a82f00) (1) Data frame sent
I0512 13:58:15.582958       7 log.go:172] (0xc002e584d0) (0xc001a82f00) Stream removed, broadcasting: 1
I0512 13:58:15.583028       7 log.go:172] (0xc002e584d0) (0xc001a82f00) Stream removed, broadcasting: 1
I0512 13:58:15.583043       7 log.go:172] (0xc002e584d0) (0xc002a70dc0) Stream removed, broadcasting: 3
I0512 13:58:15.583052       7 log.go:172] (0xc002e584d0) (0xc0001879a0) Stream removed, broadcasting: 5
May 12 13:58:15.583: INFO: Exec stderr: ""
May 12 13:58:15.583: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-504 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:58:15.583: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:58:15.584612       7 log.go:172] (0xc002e584d0) Go away received
I0512 13:58:15.617423       7 log.go:172] (0xc002b71810) (0xc002a70f00) Create stream
I0512 13:58:15.617457       7 log.go:172] (0xc002b71810) (0xc002a70f00) Stream added, broadcasting: 1
I0512 13:58:15.619164       7 log.go:172] (0xc002b71810) Reply frame received for 1
I0512 13:58:15.619199       7 log.go:172] (0xc002b71810) (0xc002a71040) Create stream
I0512 13:58:15.619216       7 log.go:172] (0xc002b71810) (0xc002a71040) Stream added, broadcasting: 3
I0512 13:58:15.619964       7 log.go:172] (0xc002b71810) Reply frame received for 3
I0512 13:58:15.620013       7 log.go:172] (0xc002b71810) (0xc002a710e0) Create stream
I0512 13:58:15.620031       7 log.go:172] (0xc002b71810) (0xc002a710e0) Stream added, broadcasting: 5
I0512 13:58:15.620898       7 log.go:172] (0xc002b71810) Reply frame received for 5
I0512 13:58:15.678692       7 log.go:172] (0xc002b71810) Data frame received for 5
I0512 13:58:15.678755       7 log.go:172] (0xc002a710e0) (5) Data frame handling
I0512 13:58:15.678801       7 log.go:172] (0xc002b71810) Data frame received for 3
I0512 13:58:15.678865       7 log.go:172] (0xc002a71040) (3) Data frame handling
I0512 13:58:15.678912       7 log.go:172] (0xc002a71040) (3) Data frame sent
I0512 13:58:15.678947       7 log.go:172] (0xc002b71810) Data frame received for 3
I0512 13:58:15.678973       7 log.go:172] (0xc002a71040) (3) Data frame handling
I0512 13:58:15.680474       7 log.go:172] (0xc002b71810) Data frame received for 1
I0512 13:58:15.680490       7 log.go:172] (0xc002a70f00) (1) Data frame handling
I0512 13:58:15.680500       7 log.go:172] (0xc002a70f00) (1) Data frame sent
I0512 13:58:15.680509       7 log.go:172] (0xc002b71810) (0xc002a70f00) Stream removed, broadcasting: 1
I0512 13:58:15.680569       7 log.go:172] (0xc002b71810) (0xc002a70f00) Stream removed, broadcasting: 1
I0512 13:58:15.680578       7 log.go:172] (0xc002b71810) (0xc002a71040) Stream removed, broadcasting: 3
I0512 13:58:15.680893       7 log.go:172] (0xc002b71810) (0xc002a710e0) Stream removed, broadcasting: 5
I0512 13:58:15.680963       7 log.go:172] (0xc002b71810) Go away received
May 12 13:58:15.681: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:58:15.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-504" for this suite.

• [SLOW TEST:24.398 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4382,"failed":0}
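The test above execs `cat /etc/hosts` in pods with and without `hostNetwork: true` and verifies which files the kubelet manages. A minimal local sketch of that distinction, assuming the header comment the kubelet writes into its managed hosts file (the marker string is an assumption based on kubelet behavior, not taken from this log):

```go
package main

import (
	"fmt"
	"strings"
)

// isKubeletManaged reports whether /etc/hosts content carries the
// header the kubelet is assumed to write when it manages the file.
// With hostNetwork=true the container sees the node's own /etc/hosts,
// which lacks this marker — which is what the test checks.
func isKubeletManaged(etcHosts string) bool {
	return strings.HasPrefix(etcHosts, "# Kubernetes-managed hosts file")
}

func main() {
	managed := "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
	hostOwn := "127.0.0.1\tlocalhost\n"
	fmt.Println(isKubeletManaged(managed), isKubeletManaged(hostOwn))
}
```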
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:58:15.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 12 13:58:31.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 13:58:31.483: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 13:58:33.483: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 13:58:33.525: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 13:58:35.483: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 13:58:35.651: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 13:58:37.483: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 13:58:37.699: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 13:58:39.483: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 13:58:39.711: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 13:58:41.483: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 13:58:41.488: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 13:58:43.483: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 13:58:43.487: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:58:43.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8756" for this suite.

• [SLOW TEST:27.809 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4414,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:58:43.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 13:58:43.559: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e82c3016-091b-4eef-a460-281e5dd889ba" in namespace "security-context-test-4106" to be "Succeeded or Failed"
May 12 13:58:43.563: INFO: Pod "busybox-user-65534-e82c3016-091b-4eef-a460-281e5dd889ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.375191ms
May 12 13:58:45.568: INFO: Pod "busybox-user-65534-e82c3016-091b-4eef-a460-281e5dd889ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009452024s
May 12 13:58:47.571: INFO: Pod "busybox-user-65534-e82c3016-091b-4eef-a460-281e5dd889ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012061542s
May 12 13:58:49.668: INFO: Pod "busybox-user-65534-e82c3016-091b-4eef-a460-281e5dd889ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109313666s
May 12 13:58:51.671: INFO: Pod "busybox-user-65534-e82c3016-091b-4eef-a460-281e5dd889ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112405925s
May 12 13:58:51.671: INFO: Pod "busybox-user-65534-e82c3016-091b-4eef-a460-281e5dd889ba" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:58:51.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4106" for this suite.

• [SLOW TEST:8.179 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4442,"failed":0}
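The runAsUser test above sets `securityContext.runAsUser: 65534` (the conventional "nobody" uid) on a busybox pod and asserts on the identity the container actually ran with. A hedged sketch of the final comparison step, assuming the container reports its uid as the output of `id -u` (the helper is illustrative, not the framework's code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ranAsExpectedUID checks the trimmed output of `id -u` from the
// test container against the uid requested via
// securityContext.runAsUser.
func ranAsExpectedUID(idOutput string, want int) bool {
	got, err := strconv.Atoi(strings.TrimSpace(idOutput))
	return err == nil && got == want
}

func main() {
	fmt.Println(ranAsExpectedUID("65534\n", 65534))
}
```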
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:58:51.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May 12 13:58:52.282: INFO: Waiting up to 5m0s for pod "pod-eb6220d3-d03e-4856-94ee-3714d03408e5" in namespace "emptydir-7117" to be "Succeeded or Failed"
May 12 13:58:52.337: INFO: Pod "pod-eb6220d3-d03e-4856-94ee-3714d03408e5": Phase="Pending", Reason="", readiness=false. Elapsed: 55.521515ms
May 12 13:58:54.339: INFO: Pod "pod-eb6220d3-d03e-4856-94ee-3714d03408e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057821708s
May 12 13:58:57.020: INFO: Pod "pod-eb6220d3-d03e-4856-94ee-3714d03408e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.738289593s
May 12 13:58:59.124: INFO: Pod "pod-eb6220d3-d03e-4856-94ee-3714d03408e5": Phase="Running", Reason="", readiness=true. Elapsed: 6.842675033s
May 12 13:59:01.151: INFO: Pod "pod-eb6220d3-d03e-4856-94ee-3714d03408e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.869645344s
STEP: Saw pod success
May 12 13:59:01.151: INFO: Pod "pod-eb6220d3-d03e-4856-94ee-3714d03408e5" satisfied condition "Succeeded or Failed"
May 12 13:59:01.155: INFO: Trying to get logs from node kali-worker pod pod-eb6220d3-d03e-4856-94ee-3714d03408e5 container test-container: 
STEP: delete the pod
May 12 13:59:01.251: INFO: Waiting for pod pod-eb6220d3-d03e-4856-94ee-3714d03408e5 to disappear
May 12 13:59:01.271: INFO: Pod pod-eb6220d3-d03e-4856-94ee-3714d03408e5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:59:01.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7117" for this suite.

• [SLOW TEST:9.598 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4455,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:59:01.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 12 13:59:08.353: INFO: Successfully updated pod "labelsupdate0a0181b3-22dd-4a5a-b14f-a8605b5460a2"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:59:10.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4695" for this suite.

• [SLOW TEST:9.172 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4461,"failed":0}
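The downward API label-update test above mutates the pod's labels and waits for the projected file to reflect the change. A sketch of reading that file back, assuming the `key="value"` one-label-per-line format the kubelet renders for `fieldRef: metadata.labels` (the format and parser are assumptions, not taken from this log):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// parseLabelsFile parses key="value" lines as rendered by a downward
// API volume file for metadata.labels (assumed format: one quoted
// label per line).
func parseLabelsFile(content string) map[string]string {
	labels := map[string]string{}
	for _, line := range strings.Split(content, "\n") {
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		labels[k] = strings.Trim(v, "\"")
	}
	return labels
}

func main() {
	content := "key1=\"value1\"\nkey2=\"value2\"\n"
	m := parseLabelsFile(content)
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s=%s\n", k, m[k])
	}
}
```

Re-reading and re-parsing this file after a label update is, in effect, what the "Successfully updated pod" step in the log is waiting to observe.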
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:59:10.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 12 13:59:25.227: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 12 13:59:25.260: INFO: Pod pod-with-poststart-exec-hook still exists
May 12 13:59:27.260: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 12 13:59:27.264: INFO: Pod pod-with-poststart-exec-hook still exists
May 12 13:59:29.260: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 12 13:59:29.263: INFO: Pod pod-with-poststart-exec-hook still exists
May 12 13:59:31.260: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 12 13:59:31.298: INFO: Pod pod-with-poststart-exec-hook still exists
May 12 13:59:33.260: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 12 13:59:33.264: INFO: Pod pod-with-poststart-exec-hook still exists
May 12 13:59:35.260: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 12 13:59:35.383: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 13:59:35.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-295" for this suite.

• [SLOW TEST:24.945 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4508,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 13:59:35.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-b2c902f2-68e6-411f-8519-6478b5b5617a
STEP: Creating configMap with name cm-test-opt-upd-0cedfa7f-bfb1-4bfc-98cd-83a903cd2d83
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b2c902f2-68e6-411f-8519-6478b5b5617a
STEP: Updating configmap cm-test-opt-upd-0cedfa7f-bfb1-4bfc-98cd-83a903cd2d83
STEP: Creating configMap with name cm-test-opt-create-d6ebdf4a-5a31-48af-b2a1-4b2649a114fe
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:01:10.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6373" for this suite.

• [SLOW TEST:95.003 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4515,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:01:10.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-5163/secret-test-d8de6967-bc99-4bfe-8922-9a9502367ad1
STEP: Creating a pod to test consume secrets
May 12 14:01:10.503: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1aac656-a70e-4cc4-b7d2-9ce1b8d07684" in namespace "secrets-5163" to be "Succeeded or Failed"
May 12 14:01:10.622: INFO: Pod "pod-configmaps-f1aac656-a70e-4cc4-b7d2-9ce1b8d07684": Phase="Pending", Reason="", readiness=false. Elapsed: 119.042782ms
May 12 14:01:12.626: INFO: Pod "pod-configmaps-f1aac656-a70e-4cc4-b7d2-9ce1b8d07684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122206611s
May 12 14:01:14.640: INFO: Pod "pod-configmaps-f1aac656-a70e-4cc4-b7d2-9ce1b8d07684": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137097465s
May 12 14:01:16.652: INFO: Pod "pod-configmaps-f1aac656-a70e-4cc4-b7d2-9ce1b8d07684": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.148702871s
STEP: Saw pod success
May 12 14:01:16.652: INFO: Pod "pod-configmaps-f1aac656-a70e-4cc4-b7d2-9ce1b8d07684" satisfied condition "Succeeded or Failed"
May 12 14:01:16.699: INFO: Trying to get logs from node kali-worker pod pod-configmaps-f1aac656-a70e-4cc4-b7d2-9ce1b8d07684 container env-test: 
STEP: delete the pod
May 12 14:01:17.167: INFO: Waiting for pod pod-configmaps-f1aac656-a70e-4cc4-b7d2-9ce1b8d07684 to disappear
May 12 14:01:17.171: INFO: Pod pod-configmaps-f1aac656-a70e-4cc4-b7d2-9ce1b8d07684 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:01:17.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5163" for this suite.

• [SLOW TEST:6.852 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4517,"failed":0}
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:01:17.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 12 14:01:26.115: INFO: Successfully updated pod "labelsupdate4cd4ad95-516a-4ac1-849a-61677d14abd5"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:01:28.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9826" for this suite.

• [SLOW TEST:11.030 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4517,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:01:28.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:01:32.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5174" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4527,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:01:32.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
May 12 14:01:32.947: INFO: Waiting up to 5m0s for pod "client-containers-209af0e1-506c-457d-b918-d647589a3483" in namespace "containers-968" to be "Succeeded or Failed"
May 12 14:01:32.964: INFO: Pod "client-containers-209af0e1-506c-457d-b918-d647589a3483": Phase="Pending", Reason="", readiness=false. Elapsed: 17.021494ms
May 12 14:01:34.967: INFO: Pod "client-containers-209af0e1-506c-457d-b918-d647589a3483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019804652s
May 12 14:01:37.097: INFO: Pod "client-containers-209af0e1-506c-457d-b918-d647589a3483": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149589013s
May 12 14:01:39.101: INFO: Pod "client-containers-209af0e1-506c-457d-b918-d647589a3483": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153845604s
STEP: Saw pod success
May 12 14:01:39.101: INFO: Pod "client-containers-209af0e1-506c-457d-b918-d647589a3483" satisfied condition "Succeeded or Failed"
May 12 14:01:39.104: INFO: Trying to get logs from node kali-worker2 pod client-containers-209af0e1-506c-457d-b918-d647589a3483 container test-container: 
STEP: delete the pod
May 12 14:01:39.168: INFO: Waiting for pod client-containers-209af0e1-506c-457d-b918-d647589a3483 to disappear
May 12 14:01:39.173: INFO: Pod client-containers-209af0e1-506c-457d-b918-d647589a3483 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:01:39.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-968" for this suite.

• [SLOW TEST:6.295 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4541,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
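The repeated `Phase="Pending"` → `Phase="Succeeded"` lines above (polled roughly every 2 seconds, with a 5m0s deadline) follow the e2e framework's wait-for-pod-condition pattern. A minimal Python sketch of that loop, where `get_phase` is a hypothetical stand-in for a pod GET against the API server (not part of the framework's actual code):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches 'Succeeded' or 'Failed',
    mirroring 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"'."""
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```

The `now`/`sleep` parameters are injected only so the loop can be exercised without real waiting; the framework's Go implementation uses a similar fixed-interval poll.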
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:01:39.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:01:44.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1230" for this suite.

• [SLOW TEST:5.246 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":266,"skipped":4563,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:01:44.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-26b1c74a-b5b4-4f4e-ab07-71817cac1c56
STEP: Creating a pod to test consume secrets
May 12 14:01:44.509: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e54618d0-6310-4159-b3a5-7d2864e7489e" in namespace "projected-6745" to be "Succeeded or Failed"
May 12 14:01:44.514: INFO: Pod "pod-projected-secrets-e54618d0-6310-4159-b3a5-7d2864e7489e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427139ms
May 12 14:01:46.518: INFO: Pod "pod-projected-secrets-e54618d0-6310-4159-b3a5-7d2864e7489e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008952354s
May 12 14:01:48.730: INFO: Pod "pod-projected-secrets-e54618d0-6310-4159-b3a5-7d2864e7489e": Phase="Running", Reason="", readiness=true. Elapsed: 4.220750418s
May 12 14:01:50.733: INFO: Pod "pod-projected-secrets-e54618d0-6310-4159-b3a5-7d2864e7489e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223741418s
STEP: Saw pod success
May 12 14:01:50.733: INFO: Pod "pod-projected-secrets-e54618d0-6310-4159-b3a5-7d2864e7489e" satisfied condition "Succeeded or Failed"
May 12 14:01:50.735: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-e54618d0-6310-4159-b3a5-7d2864e7489e container projected-secret-volume-test: 
STEP: delete the pod
May 12 14:01:50.773: INFO: Waiting for pod pod-projected-secrets-e54618d0-6310-4159-b3a5-7d2864e7489e to disappear
May 12 14:01:50.855: INFO: Pod pod-projected-secrets-e54618d0-6310-4159-b3a5-7d2864e7489e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:01:50.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6745" for this suite.

• [SLOW TEST:6.425 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4571,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:01:50.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 14:01:51.871: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 14:01:53.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888911, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888911, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888912, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888911, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 14:01:55.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888911, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888911, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888912, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724888911, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 14:01:58.919: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:01:59.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-164" for this suite.
STEP: Destroying namespace "webhook-164-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.594 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":268,"skipped":4573,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:02:00.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 12 14:02:01.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:02:07.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7454" for this suite.

• [SLOW TEST:7.284 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4580,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:02:07.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-4da8911d-4616-47e9-958e-8bfd2566b24d
STEP: Creating a pod to test consume configMaps
May 12 14:02:09.154: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766" in namespace "projected-390" to be "Succeeded or Failed"
May 12 14:02:09.293: INFO: Pod "pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766": Phase="Pending", Reason="", readiness=false. Elapsed: 139.095406ms
May 12 14:02:11.354: INFO: Pod "pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200220151s
May 12 14:02:13.756: INFO: Pod "pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766": Phase="Pending", Reason="", readiness=false. Elapsed: 4.601875431s
May 12 14:02:15.810: INFO: Pod "pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766": Phase="Running", Reason="", readiness=true. Elapsed: 6.656330461s
May 12 14:02:17.815: INFO: Pod "pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.660696607s
STEP: Saw pod success
May 12 14:02:17.815: INFO: Pod "pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766" satisfied condition "Succeeded or Failed"
May 12 14:02:17.817: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766 container projected-configmap-volume-test: 
STEP: delete the pod
May 12 14:02:17.852: INFO: Waiting for pod pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766 to disappear
May 12 14:02:17.857: INFO: Pod pod-projected-configmaps-6a3505c6-d7f3-4383-9bb8-9de76cb5f766 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:02:17.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-390" for this suite.

• [SLOW TEST:10.125 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4614,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:02:17.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-2905
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2905
STEP: Deleting pre-stop pod
May 12 14:02:33.350: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:02:33.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2905" for this suite.

• [SLOW TEST:15.582 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":271,"skipped":4638,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:02:33.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-1933
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1933 to expose endpoints map[]
May 12 14:02:34.130: INFO: Get endpoints failed (21.550802ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 12 14:02:35.168: INFO: successfully validated that service multi-endpoint-test in namespace services-1933 exposes endpoints map[] (1.059605271s elapsed)
STEP: Creating pod pod1 in namespace services-1933
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1933 to expose endpoints map[pod1:[100]]
May 12 14:02:39.798: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.585249578s elapsed, will retry)
May 12 14:02:41.816: INFO: successfully validated that service multi-endpoint-test in namespace services-1933 exposes endpoints map[pod1:[100]] (6.603092108s elapsed)
STEP: Creating pod pod2 in namespace services-1933
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1933 to expose endpoints map[pod1:[100] pod2:[101]]
May 12 14:02:46.583: INFO: Unexpected endpoints: found map[b6f4b154-a4c4-4ae5-960a-d572cf185445:[100]], expected map[pod1:[100] pod2:[101]] (4.72667565s elapsed, will retry)
May 12 14:02:48.692: INFO: successfully validated that service multi-endpoint-test in namespace services-1933 exposes endpoints map[pod1:[100] pod2:[101]] (6.835432227s elapsed)
STEP: Deleting pod pod1 in namespace services-1933
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1933 to expose endpoints map[pod2:[101]]
May 12 14:02:49.858: INFO: successfully validated that service multi-endpoint-test in namespace services-1933 exposes endpoints map[pod2:[101]] (1.163572769s elapsed)
STEP: Deleting pod pod2 in namespace services-1933
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1933 to expose endpoints map[]
May 12 14:02:50.999: INFO: successfully validated that service multi-endpoint-test in namespace services-1933 exposes endpoints map[] (1.136414614s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:02:51.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1933" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:18.037 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":272,"skipped":4656,"failed":0}
SSSSSSSSSSSSSS
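The Services test above repeatedly compares the observed endpoints against an expected map of pod name → port list (e.g. `map[pod1:[100] pod2:[101]]`), retrying until they match or the 3m0s window expires. A rough Python equivalent of just the comparison step (`endpoints_match` is an illustrative helper, not the framework's code):

```python
def endpoints_match(observed, expected):
    """Compare endpoint maps of pod name -> port list, ignoring port order,
    as in 'waiting ... to expose endpoints map[pod1:[100] pod2:[101]]'."""
    normalize = lambda m: {name: sorted(ports) for name, ports in m.items()}
    return normalize(observed) == normalize(expected)
```

Note the intermediate "Unexpected endpoints: found map[b6f4b154-...:[100]]" line: until the endpoint is translated back to its pod name, the comparison fails and the test retries, which is exactly the behavior a check like this produces.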
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:02:51.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 12 14:02:51.729: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-529 /api/v1/namespaces/watch-529/configmaps/e2e-watch-test-resource-version fd20d866-f4e8-4dce-be48-968cf1496751 3747736 0 2020-05-12 14:02:51 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-12 14:02:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 12 14:02:51.729: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-529 /api/v1/namespaces/watch-529/configmaps/e2e-watch-test-resource-version fd20d866-f4e8-4dce-be48-968cf1496751 3747737 0 2020-05-12 14:02:51 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-12 14:02:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:02:51.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-529" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":273,"skipped":4670,"failed":0}
SSSSSSSSSSSS
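The `FieldsV1{Raw:*[123 34 102 ...]}` arrays in the watch events above are managed-fields JSON printed as raw byte values by Go's default struct formatting. They decode to ordinary JSON; for example, the array from the MODIFIED event:

```python
import json

# Byte values copied from the FieldsV1 Raw array in the MODIFIED event above.
raw = [123, 34, 102, 58, 100, 97, 116, 97, 34, 58, 123, 34, 46, 34, 58, 123, 125,
       44, 34, 102, 58, 109, 117, 116, 97, 116, 105, 111, 110, 34, 58, 123, 125, 125,
       44, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123, 34, 102,
       58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34, 46, 34, 58, 123, 125, 44,
       34, 102, 58, 119, 97, 116, 99, 104, 45, 116, 104, 105, 115, 45, 99, 111,
       110, 102, 105, 103, 109, 97, 112, 34, 58, 123, 125, 125, 125, 125]

decoded = json.loads(bytes(raw))
# Decodes to the managed-fields entry covering the ConfigMap's data
# and its watch-this-configmap label.
```

So the opaque byte dump simply records which fields (`data`, `metadata.labels["watch-this-configmap"]`) the `e2e.test` field manager owns.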
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:02:51.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 12 14:02:51.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7de1e68-b083-43f2-8ced-17d18f96eed5" in namespace "projected-139" to be "Succeeded or Failed"
May 12 14:02:51.876: INFO: Pod "downwardapi-volume-a7de1e68-b083-43f2-8ced-17d18f96eed5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.043279ms
May 12 14:02:54.062: INFO: Pod "downwardapi-volume-a7de1e68-b083-43f2-8ced-17d18f96eed5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225989429s
May 12 14:02:56.066: INFO: Pod "downwardapi-volume-a7de1e68-b083-43f2-8ced-17d18f96eed5": Phase="Running", Reason="", readiness=true. Elapsed: 4.229635148s
May 12 14:02:58.078: INFO: Pod "downwardapi-volume-a7de1e68-b083-43f2-8ced-17d18f96eed5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.241389518s
STEP: Saw pod success
May 12 14:02:58.078: INFO: Pod "downwardapi-volume-a7de1e68-b083-43f2-8ced-17d18f96eed5" satisfied condition "Succeeded or Failed"
May 12 14:02:58.079: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-a7de1e68-b083-43f2-8ced-17d18f96eed5 container client-container: 
STEP: delete the pod
May 12 14:02:58.117: INFO: Waiting for pod downwardapi-volume-a7de1e68-b083-43f2-8ced-17d18f96eed5 to disappear
May 12 14:02:58.127: INFO: Pod downwardapi-volume-a7de1e68-b083-43f2-8ced-17d18f96eed5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:02:58.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-139" for this suite.

• [SLOW TEST:6.400 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4682,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 12 14:02:58.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 12 14:03:04.905: INFO: Successfully updated pod "annotationupdate80384bb1-a773-491a-bc11-8186289c59ae"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 12 14:03:07.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3298" for this suite.

• [SLOW TEST:9.233 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4683,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 12 14:03:07.369: INFO: Running AfterSuite actions on all nodes
May 12 14:03:07.369: INFO: Running AfterSuite actions on node 1
May 12 14:03:07.369: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 5662.987 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS
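Each completed spec emits a machine-readable progress line such as `{"msg":"PASSED ...","total":275,"completed":268,"skipped":4573,"failed":0}`, and the final counters match the `Ran 275 of 4992 Specs` summary. Assuming you have a log like this one as plain text, a small Python sketch for pulling out the last counters:

```python
import json
import re

# Matches the single-line JSON progress records embedded in the log,
# e.g. {"msg":"PASSED ...","total":275,"completed":268,"skipped":4573,"failed":0}
PROGRESS = re.compile(r'\{"msg":.*?"failed":\d+\}')

def final_counters(log_text):
    """Return the last progress record in the log (the suite summary),
    or None if no progress lines are present."""
    last = None
    for match in PROGRESS.finditer(log_text):
        last = json.loads(match.group(0))
    return last
```

Because the records carry running totals, the last one is the authoritative summary (here: 275 completed, 4717 skipped, 0 failed), even when skip markers or other output are fused onto the same physical line.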